Dataset columns (summary of the split):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-28 00:40:13 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 500 distinct values |
| tags | sequence of strings | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-28 00:36:54 |
| card | string | length 11 – 1.01M |
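The column summary above maps directly onto 🤗 Datasets `Features`. The sketch below is an illustrative reconstruction of that schema (the feature names and dtypes come from the table; the single example record is copied from the first row of the dump, with the tags and card strings truncated), not the canonical dataset definition:

```python
from datetime import datetime, timezone
from datasets import Dataset, Features, Sequence, Value

# Schema reconstructed from the column summary above.
features = Features({
    "modelId": Value("string"),
    "author": Value("string"),
    "last_modified": Value("timestamp[us, tz=UTC]"),
    "downloads": Value("int64"),
    "likes": Value("int64"),
    "library_name": Value("string"),
    "tags": Sequence(Value("string")),
    "pipeline_tag": Value("string"),
    "createdAt": Value("timestamp[us, tz=UTC]"),
    "card": Value("string"),
})

# One record taken from the first row below; tags and card are truncated here.
record = {
    "modelId": ["spow12/llama-3-Korean-Bllossom-8B_ver_4.3_big_sft_2epochs"],
    "author": ["spow12"],
    "last_modified": [datetime(2024, 5, 16, 0, 55, 33, tzinfo=timezone.utc)],
    "downloads": [2248],
    "likes": [0],
    "library_name": ["transformers"],
    "tags": [["transformers", "safetensors", "llama", "text-generation"]],
    "pipeline_tag": ["text-generation"],
    "createdAt": [datetime(2024, 5, 16, 0, 47, 20, tzinfo=timezone.utc)],
    "card": ["--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID ..."],
}

ds = Dataset.from_dict(record, features=features)
print(ds)                                   # Dataset with 10 features, 1 row
print(ds[0]["modelId"], ds[0]["downloads"])
```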
modelId: spow12/llama-3-Korean-Bllossom-8B_ver_4.3_big_sft_2epochs
author: spow12
last_modified: 2024-05-16T00:55:33Z
downloads: 2,248
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-05-16T00:47:20Z
card:
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Sorour/llama3-ft-finred
author: Sorour
last_modified: 2024-05-16T00:52:30Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-05-16T00:46:19Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_1158s
author: nsugianto
last_modified: 2024-05-16T00:51:01Z
downloads: 150
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: object-detection
createdAt: 2024-05-14T05:49:00Z
card:
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
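The card above lists training hyperparameters but no inference snippet. A minimal usage sketch for this object-detection checkpoint, assuming the standard 🤗 `pipeline` API and a local document image (`page_scan.png` is a placeholder path, not taken from the card), would look like:

```python
from transformers import pipeline

# Placeholder usage sketch for the DETR fine-tune above; not code from the card itself.
detector = pipeline(
    "object-detection",
    model="nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_1158s",
)

# "page_scan.png" is an assumed local image of a document page.
for prediction in detector("page_scan.png", threshold=0.5):
    print(prediction["label"], round(prediction["score"], 3), prediction["box"])
```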
modelId: CyberNative-AI/Colibri_8b_v0.1_q5_gguf
author: CyberNative-AI
last_modified: 2024-05-16T00:50:09Z
downloads: 4
likes: 1
library_name: null
tags: [ "gguf", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2024-05-15T23:52:46Z
card:
GGUF q5 quant of [CyberNative-AI/Colibri_8b_v0.1](https://huggingface.co/CyberNative-AI/Colibri_8b_v0.1)
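Since the card only states that this repo is a q5 GGUF quant, here is a hedged loading sketch using `huggingface_hub` and `llama-cpp-python` (both assumptions; the card gives no usage instructions). The `filename` argument is a placeholder, because the actual .gguf filename in the repo is not given in this dump:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python is an assumption; the card only says "GGUF q5 quant"

# The .gguf filename below is a placeholder: replace it with the real file in the repo.
gguf_path = hf_hub_download(
    repo_id="CyberNative-AI/Colibri_8b_v0.1_q5_gguf",
    filename="colibri_8b_v0.1.Q5_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What is this model quantized to? A:", max_tokens=64)
print(out["choices"][0]["text"])
```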
modelId: nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_1158s
author: nsugianto
last_modified: 2024-05-16T00:49:56Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "endpoints_compatible", "region:us" ]
pipeline_tag: object-detection
createdAt: 2024-05-15T08:18:43Z
card:
--- tags: - generated_from_trainer model-index: - name: detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
modelId: Litzy619/G0515HMA13H
author: Litzy619
last_modified: 2024-05-16T00:46:04Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
pipeline_tag: null
createdAt: 2024-05-15T19:09:13Z
card:
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA13H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA13H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2098 | 0.09 | 10 | 2.8516 | | 2.5967 | 0.18 | 20 | 2.1281 | | 1.6718 | 0.27 | 30 | 1.1013 | | 0.6829 | 0.36 | 40 | 0.2924 | | 0.2107 | 0.45 | 50 | 0.1645 | | 0.1575 | 0.54 | 60 | 0.1545 | | 0.1513 | 0.63 | 70 | 0.1509 | | 0.1525 | 0.73 | 80 | 0.1491 | | 0.145 | 0.82 | 90 | 0.1503 | | 0.1484 | 0.91 | 100 | 0.1484 | | 0.15 | 1.0 | 110 | 0.1488 | | 0.1437 | 1.09 | 120 | 0.1483 | | 0.145 | 1.18 | 130 | 0.1479 | | 0.1459 | 1.27 | 140 | 0.1469 | | 0.1487 | 1.36 | 150 | 0.1454 | | 0.1418 | 1.45 | 160 | 0.1483 | | 0.1435 | 1.54 | 170 | 0.1437 | | 0.143 | 1.63 | 180 | 0.1416 | | 0.1423 | 1.72 | 190 | 0.1391 | | 0.1375 | 1.81 | 200 | 0.1352 | | 0.138 | 1.9 | 210 | 0.1349 | | 0.1345 | 1.99 | 220 | 0.1269 | | 0.1297 | 2.08 | 230 | 0.1297 | | 0.1267 | 2.18 | 240 | 0.1294 | | 0.1264 | 2.27 | 250 | 0.1276 | | 0.1255 | 2.36 | 260 | 0.1256 | | 0.1249 | 2.45 | 270 | 0.1238 | | 0.1191 | 2.54 | 280 | 0.1222 | | 0.1169 | 2.63 | 290 | 0.1207 | | 0.1163 | 2.72 | 300 | 0.1199 | | 0.1199 | 2.81 | 310 | 0.1190 | | 0.1226 | 2.9 | 320 | 0.1190 | | 0.1203 | 2.99 | 330 | 0.1189 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
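One detail worth spelling out from the hyperparameters above: the reported total_train_batch_size of 128 is just the per-device batch size multiplied by the gradient-accumulation steps (a single device is assumed here, since the card lists no device count):

```python
# Effective batch size implied by the card's hyperparameters.
train_batch_size = 8              # per-device batch size
gradient_accumulation_steps = 16
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)     # 128, matching the value reported above
```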
modelId: alan314159/quantized-chatllm
author: alan314159
last_modified: 2024-05-16T00:44:00Z
downloads: 0
likes: 0
library_name: null
tags: [ "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2024-05-07T02:12:31Z
card:
--- license: mit --- for use in [chatllm.cpp](https://github.com/foldl/chatllm.cpp)
modelId: khkeni/team9Model
author: khkeni
last_modified: 2024-05-16T00:35:53Z
downloads: 104
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "electra", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2024-05-16T00:00:39Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Scherzando/bertm-fashion-6ep-wd0x2
author: Scherzando
last_modified: 2024-05-16T00:35:45Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "jax", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2024-05-16T00:35:19Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: domenicrosati/repnoise_0.001beta_attacked_3e-4
author: domenicrosati
last_modified: 2024-05-16T00:34:25Z
downloads: 4
likes: 1
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-05-16T00:31:08Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: YYYYYYibo/nash_dpo_rank4_on_vanilla_iter_2
author: YYYYYYibo
last_modified: 2024-05-16T00:33:23Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:updated", "dataset:original", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:adapter:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2024-05-15T22:35:52Z
card:
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo base_model: alignment-handbook/zephyr-7b-sft-full datasets: - updated - original model-index: - name: nash_dpo_rank4_on_vanilla_iter_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nash_dpo_rank4_on_vanilla_iter_2 This model is a fine-tuned version of [YYYYYYibo/nash_dpo_rank4_on_vanilla_iter_1](https://huggingface.co/YYYYYYibo/nash_dpo_rank4_on_vanilla_iter_1) on the updated and the original datasets. It achieves the following results on the evaluation set: - Loss: 0.6580 - Rewards/chosen: -0.1638 - Rewards/rejected: -0.2696 - Rewards/accuracies: 0.5920 - Rewards/margins: 0.1058 - Logps/rejected: -289.0013 - Logps/chosen: -304.2577 - Logits/rejected: -2.3485 - Logits/chosen: -2.4282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5679 | 0.64 | 100 | 0.6580 | -0.1638 | -0.2696 | 0.5920 | 0.1058 | -289.0013 | -304.2577 | -2.3485 | -2.4282 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.3.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
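This row's library_name is peft, so the repo presumably holds a LoRA adapter rather than full weights. A minimal loading sketch follows, assuming the base model named in the tags (alignment-handbook/zephyr-7b-sft-full) and the standard PEFT API; this is not code from the card itself, and the prompt is a placeholder:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alignment-handbook/zephyr-7b-sft-full"          # base model listed in the tags
adapter_id = "YYYYYYibo/nash_dpo_rank4_on_vanilla_iter_2"   # this row's repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)         # attach the DPO-trained adapter

inputs = tokenizer("Explain DPO in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```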
modelId: ClaudioItaly/JackRussell-Q4_K_M-GGUF
author: ClaudioItaly
last_modified: 2024-05-16T00:32:10Z
downloads: 2
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-05-16T00:31:55Z
card:
--- library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # ClaudioItaly/JackRussell-Q4_K_M-GGUF This model was converted to GGUF format from [`mergekit-community/JackRussell`](https://huggingface.co/mergekit-community/JackRussell) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mergekit-community/JackRussell) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo ClaudioItaly/JackRussell-Q4_K_M-GGUF --model jackrussell.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo ClaudioItaly/JackRussell-Q4_K_M-GGUF --model jackrussell.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m jackrussell.Q4_K_M.gguf -n 128 ```
modelId: Scherzando/bertm-fashion-12ep-wd0x1
author: Scherzando
last_modified: 2024-05-16T00:13:13Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "jax", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2024-05-16T00:13:01Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: tuquyennnn/Bart-base-v1
author: tuquyennnn
last_modified: 2024-05-16T00:07:50Z
downloads: 111
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text2text-generation
createdAt: 2024-05-16T00:07:29Z
card:
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: Bart-base-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bart-base-v1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0951 | 0.32 | 250 | 0.0443 | | 0.051 | 0.64 | 500 | 0.0424 | | 0.0493 | 0.96 | 750 | 0.0409 | | 0.0441 | 1.28 | 1000 | 0.0401 | | 0.0428 | 1.61 | 1250 | 0.0399 | | 0.0426 | 1.93 | 1500 | 0.0393 | | 0.0394 | 2.25 | 1750 | 0.0397 | | 0.0382 | 2.57 | 2000 | 0.0390 | | 0.038 | 2.89 | 2250 | 0.0389 | | 0.0356 | 3.21 | 2500 | 0.0393 | | 0.0347 | 3.53 | 2750 | 0.0392 | | 0.0343 | 3.85 | 3000 | 0.0390 | | 0.0331 | 4.17 | 3250 | 0.0392 | | 0.0313 | 4.49 | 3500 | 0.0393 | | 0.0319 | 4.82 | 3750 | 0.0390 | | 0.0307 | 5.14 | 4000 | 0.0392 | | 0.0296 | 5.46 | 4250 | 0.0394 | | 0.0297 | 5.78 | 4500 | 0.0391 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.15.2
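The card above does not say which task Bart-base-v1 was fine-tuned for ("an unknown dataset"), so the sketch below only shows the generic way to invoke a text2text-generation checkpoint; the input string is a placeholder, not an example from the card:

```python
from transformers import pipeline

# Generic invocation sketch; the actual task and expected input format are undocumented above.
t2t = pipeline("text2text-generation", model="tuquyennnn/Bart-base-v1")
print(t2t("Placeholder input text for the fine-tuned BART model.", max_new_tokens=64))
```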
modelId: engineersakibcse47/NER_on_Bangla_Language
author: engineersakibcse47
last_modified: 2024-05-16T00:05:54Z
downloads: 109
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "bert", "token-classification", "bn", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: token-classification
createdAt: 2024-05-10T13:45:34Z
card:
--- language: bn datasets: - wikiann examples: widget: - text: "আমি, সাকিব হোসেন হিমেল, ডাটা সায়েন্সে স্নাতকোত্তর করছি, বর্তমানে জার্মানির বার্লিনে থাকি, গত বছর বাংলাদেশ থেকে এসেছি।" example_title: "Sentence_1" - text: "হোর্হেলুইস বোর্হেস" example_title: "Sentence_2" - text: "বাংলাদেশ জাতীয় ক্রিকেট দল" example_title: "Sentence_3" - text: "কুড়িগ্রাম উপজেলা" example_title: "Sentence_4" - text: "লিওনার্দো দা ভিঞ্চি" example_title: "Sentence_5" - text: "রিয়াল মাদ্রিদ ফুটবল ক্লাব" example_title: "Sentence_6" --- <h1>Named Entity Recognition on Bangla Language</h1> Fine Tuning BERT for NER on Bengali Language Tagging using HuggingFace ## Correspondence Label ID and Label Name | Label ID | Label Name| | -------- | ----- | |0 | O | | 1 | B-PER | | 2 | I-PER | | 3 | B-ORG| | 4 | I-ORG | | 5 | B-LOC | | 6 | I-LOC | <h1>Evaluation and Validation</h1> | Name | Precision | Recall | F1 | Accuracy | | ---- | -------- | ----- | ---- | ---- | | Train/Val set | 0.963899 | 0.964770 | 0.964334 | 0.981252 | | Test set | 0.952855 | 0.965105 | 0.958941 | 0.981349 | Transformers AutoModelForTokenClassification ```py from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("engineersakibcse47/NER_on_Bangla_Language") model_ner = AutoModelForTokenClassification.from_pretrained("engineersakibcse47/NER_on_Bangla_Language") pipe = pipeline("ner", model=model_ner, tokenizer=tokenizer, aggregation_strategy="simple") sample = "বসনিয়া ও হার্জেগোভিনা" result = pipe(sample) result ```
modelId: ashekhar1976/finetuned_qlora_llama3_instruct
author: ashekhar1976
last_modified: 2024-05-16T00:04:52Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-05-08T03:16:12Z
card:
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** ashekhar1976 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
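The Unsloth card above names the base model but includes no inference code. A hedged loading sketch follows, assuming the repo holds merged Llama-3 weights (the tags list llama and safetensors); if it actually stores only a QLoRA adapter, it would need to be attached to the base model with PEFT instead. The instruction-style prompt is also an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ashekhar1976/finetuned_qlora_llama3_instruct"

# Assumes merged weights are stored in the repo; see the note above.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "### Instruction:\nSummarize what QLoRA fine-tuning does.\n\n### Response:\n"  # assumed template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```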
modelId: saaduddinM/Mamba2.8B_woCOT
author: saaduddinM
last_modified: 2024-05-16T00:03:48Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-05-14T04:21:33Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: abc88767/4sc74
author: abc88767
last_modified: 2024-05-15T23:48:15Z
downloads: 134
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-05-15T23:46:34Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thorirhrafn/llama_SFT_e1_DPO_e1
thorirhrafn
2024-05-15T23:48:13Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-15T22:41:37Z
--- license: llama2 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: llama_SFT_e1_DPO_e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_SFT_e1_DPO_e1 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1876 - Rewards/chosen: 0.3221 - Rewards/rejected: -1.3485 - Rewards/accuracies: 1.0 - Rewards/margins: 1.6706 - Logps/rejected: -199.1326 - Logps/chosen: -156.6435 - Logits/rejected: -1.0544 - Logits/chosen: -0.8650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6715 | 0.1 | 25 | 0.6332 | 0.0386 | -0.0871 | 0.9333 | 0.1257 | -186.5185 | -159.4784 | -1.0533 | -0.8570 | | 0.5507 | 0.2 | 50 | 0.5213 | 0.1021 | -0.2851 | 1.0 | 0.3872 | -188.4984 | -158.8435 | -1.0540 | -0.8579 | | 0.4521 | 0.3 | 75 | 0.4180 | 0.1622 | -0.5141 | 1.0 | 0.6763 | -190.7885 | -158.2424 | -1.0548 | -0.8606 | | 0.3675 | 0.4 | 100 | 0.3332 | 0.2182 | -0.7466 | 1.0 | 0.9647 | -193.1132 | -157.6828 | -1.0545 | -0.8611 | | 0.3149 | 0.5 | 125 | 0.2724 | 0.2574 | -0.9589 | 1.0 | 1.2164 | -195.2370 | -157.2902 | -1.0544 | -0.8631 | | 0.2486 | 0.6 | 150 | 0.2247 | 0.2948 | -1.1593 | 1.0 | 1.4541 | -197.2406 | -156.9163 | -1.0550 | -0.8663 | | 0.2173 | 0.7 | 175 | 0.1966 | 0.3176 | -1.2962 | 1.0 | 1.6138 | -198.6099 | -156.6887 | -1.0553 | -0.8673 | | 0.1971 | 0.79 | 200 | 0.1878 | 0.3231 | -1.3461 | 1.0 | 1.6692 | -199.1087 | -156.6337 | -1.0542 | -0.8665 | | 0.1869 | 0.89 | 225 | 0.1869 | 0.3210 | -1.3535 | 1.0 | 1.6745 | -199.1825 | -156.6541 | -1.0546 | -0.8626 | | 0.1911 | 0.99 | 250 | 0.1876 | 0.3221 | -1.3485 | 1.0 | 1.6706 | -199.1326 | -156.6435 | -1.0544 | -0.8650 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.2.0+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
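The card above does not include a usage snippet; a minimal sketch for loading this DPO-trained LoRA adapter on top of the Llama-2-7b base with `peft` might look like the following (repository ids are taken from the card, while the prompt and generation settings are illustrative assumptions):

```python
# Minimal sketch (not from the original card): load the DPO LoRA adapter
# on top of the meta-llama/Llama-2-7b-hf base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"            # base model named in the card
adapter_id = "thorirhrafn/llama_SFT_e1_DPO_e1"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the trained adapter

# Prompt format and decoding parameters are illustrative, not specified by the card.
prompt = "Question: What does DPO optimize?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```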
LegoDeGamer/NathanielsShow
LegoDeGamer
2024-05-15T23:47:50Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-05-15T22:29:48Z
--- license: openrail language: - en You did not need to read this # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/MediaTek-Research_-_Breeze-7B-Instruct-v0_1-4bits
RichardErkhov
2024-05-15T23:47:19Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2403.02712", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-15T23:43:37Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Breeze-7B-Instruct-v0_1 - bnb 4bits - Model creator: https://huggingface.co/MediaTek-Research/ - Original model: https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1/ Original model description: --- pipeline_tag: text-generation license: apache-2.0 language: - zh - en --- # Model Card for MediaTek Research Breeze-7B-Instruct-v0_1 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. [Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) is a slightly modified version of Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters. *Update (Feb. 21st, 2024): Breeze-7B-Instruct-64k-v0_1 has been temporarily removed from Hugging Face due to its actual performance in long context tests not meeting expectations.* *Update (Mar. 7th, 2024): The current release version of Breeze-7B is v1.0. See [Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0).* The current release version of Breeze-7B is v0.1. Practicality-wise: - Breeze-7B-Base expands the original vocabulary with additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. - In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not a chapter level. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English, when compared to similar sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] 
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Features - Breeze-7B-Base-v0_1 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v0_1 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) - Breeze-7B-Instruct-64k-v0_1 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 64k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v0_1 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v0_1 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-64k-v0_1 - Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. | Models | |↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |----------------------------------------------|--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [Yi-34B](https://huggingface.co/01-ai/Yi-34B)| 34B | 63.10 | 84.57 | 49.31 | 77.42 | | [Qwen-14B](https://huggingface.co/01-ai/Qwen/Qwen-14B)| 14B | 51.30 | 16.95 * | 50.69 | 68.83 | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen-7B](https://huggingface.co/01-ai/Qwen/Qwen-7B)| 7B | 42.84 | 0.0 * | 39.58 | 61.00 | | [**Breeze-7B-Base-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) | 7B | 40.35 | 81.13 | 28.47 | 61.63 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)| 7B | 36.93 | 79.27 | 27.78 | 64.89 | \* Few-shot learning cannot effectively guide the model to generate the proper answer. ## Chat Model Performance **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). 
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. | Models | |↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|--------------|-------------|-------------|------------------|-------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Chat |EN, Knowledge|EN, Knowledge| | | |0 shot | 0 shot | 5 shot | 3 shot | 0 shot |0 shot | 0 shot | 5 shot | | [gpt-3.5-turbo](https://openai.com) | |7.1 | 43.56 | | | 45.14 |7.9 | 67.09 | | | [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 34B |6.9 | 54.87 | | | 36.81 |7.6 | 71.04 | | | [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 14B |6.4 | 48.41 | | | 41.67 |7.2 | 64.91 | | | [**Breeze-7B-Instruct-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) | 7B |5.7 | 41.61 | | | 45.83 |7.1 | 63.26 | | | [**Breeze-7B-Instruct-64k-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) | 7B |5.5 | 40.99 | | | 36.11 |7.1 | 63.68 | | | [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 7B |5.4 | 40.02 | | | 33.33 |6.2 | 55.94 | | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | | | 25.69 |6.0 | 59.45 | | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | | | 23.61 |-* | 50.50 | | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | | | 31.25 | -* | 42.72 | | \* Taiwan-LLM models responds to multi-turn questions (English) in Traditional Chinese. 
| Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities|↑ AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------| | gpt-3.5-turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Yi-34B-Chat | 9.0 | 4.8 | 5.7 | 4.0 | 4.7 | 8.5 | 8.7 | 9.8 | 6.9 | | Qwen-14B-Chat | 7.6 | 5.7 | 4.5 | 4.2 | 5.3 | 7.5 | 7.3 | 9.1 | 6.4 | | **Breeze-7B-Instruct-v0_1** | 6.5 | 5.6 | 3.9 | 3.6 | 4.3 | 6.9 | 5.7 | 9.3 | 5.7 | | **Breeze-7B-Instruct-64k-v0_1** | 6.1 | 5.3 | 3.7 | 2.9 | 4.2 | 7.0 | 6.7 | 8.3 | 5.5 | | Qwen-7B-Chat | 6.6 | 4.5 | 4.8 | 2.9 | 3.6 | 6.2 | 6.8 | 8.2 | 5.4 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | | Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | ↑ AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | Yi-34B-Chat | 47.65 | 64.25 | 52.73 | 54.91 | 54.87 | | Qwen-14B-Chat | 43.83 | 55.00 | 48.55 | 46.22 | 48.41 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | gpt-3.5-turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 | | **Breeze-7B-Instruct-v0_1** | 37.41 | 46.81 | 42.06 | 40.16 | 41.61 | | **Breeze-7B-Instruct-64k-v0_1** | 37.88 | 46.35 | 40.31 | 39.40 | 40.99 | | Qwen-7B-Chat | 35.44 | 46.22 | 38.35 | 40.06 | 40.02 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). | Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | Yi-6B-Chat | 10.62 | 5.2k | | **Breeze-7B-Instruct-v0_1** | 10.74 | 11.1k | | **Breeze-7B-Instruct-64k-v0_1** | 10.74 | 88.8k | | Qwen-7B-Chat | 10.86 | 9.8k | | Qwen-14B-Chat | 18.89 | 9.8k | | Mistral-7B-v0.1-Instruct | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k | | Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k | | Yi-34B-Chat | 43.71 | 4.5k | ## Long-context Performance TBD ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Instruct-v0_1", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2" # optional ) ``` The structure of the query is ```txt <s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. 
The user you are helping speaks Traditional Chinese and comes from Taiwan. ``` We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0_1") >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] ``` ## Citation ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
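Building on the loading and chat-template snippets above, a hedged end-to-end generation sketch could look like this (the decoding parameters are illustrative and not taken from the original card; the chat template is assumed to end at the generation point after the last user turn, as shown in the card's own example):

```python
# Illustrative sketch: apply the bundled chat template and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MediaTek-Research/Breeze-7B-Instruct-v0_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)

chat = [{"role": "user", "content": "你好,請問你可以完成什麼任務?"}]
# Per the card's example, the rendered string already starts with <s> and ends
# with "[/INST] ", so we skip adding special tokens again when tokenizing.
prompt = tokenizer.apply_chat_template(chat, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```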
hui168/ppo-CartPole-v1
hui168
2024-05-15T23:46:14Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-15T17:34:52Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: ppo-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 200000.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
elinas/Llama-3-13B-Instruct-ft
elinas
2024-05-15T23:45:46Z
10
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "dataset:Chat-Error/Pure-dove-sharegpt", "base_model:elinas/Llama-3-13B-Instruct", "base_model:finetune:elinas/Llama-3-13B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-12T23:29:39Z
--- base_model: - elinas/Llama-3-13B-Instruct library_name: transformers tags: - mergekit - merge datasets: - Chat-Error/Pure-dove-sharegpt license: llama3 --- # Llama-3-13B-Instruct-ft This is a QLoRA **finetune** of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The model is based on my passthrough merge of [Llama-3-13B-Instruct](https://huggingface.co/elinas/Llama-3-13B-Instruct) This was primarily an experiment to see how a passthrough merge will respond to further finetuning, though this was done on a small dataset. The goal was to make a "mid" sized model like Meta has released in the past and the merge method was inspired by [mlabonne's Llama-3-120B](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct). The model was finetuned on **8192 context length** and is likely reliable using RoPE up to 32k. It still cannot do math reliably; neither can Llama-3-8B, and in my tests only Llama-3-70B passes basic arithmetic, but it is a better storywriter/RP than Llama-3-8B from some side by side testing I conducted. Further finetuning this model or finetuning the [base model](https://huggingface.co/elinas/Llama-3-13B-Instruct) on more samples is encouraged. ## Datasets * [Chat-Error/Pure-dove-sharegpt](https://huggingface.co/datasets/Chat-Error/Pure-dove-sharegpt) A small dataset was used to see how it affects performance. Originally I planned to do a larger dataset (196k samples), but wanted to start with a smaller one first to see how much the model improved with some additional finetuning. Next steps would be finetuning on a larger dataset if through further testing, performance improvements are noticed. ## Finetuning details This is a QLoRA model and all modules were targeted. ```yaml lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj lora_modules_to_save: - embed_tokens - lm_head ``` ```yaml The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - total_train_batch_size: 3 - total_eval_batch_size: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - num_epochs: 1 ``` Optimizer `paged_adamw_8bit` and Deepspeed ZeRO 3 was used at a LR of `1e-5` using the cosine scheduler for 1 epoch on 3x3090s taking 4h 12m 13s total. Sample packing and padding was disabled to reduce VRAM consumption significantly at the cost of speed. W&B Run Summary ``` wandb: Run summary: wandb: eval/loss 1.00774 wandb: eval/runtime 535.3847 wandb: eval/samples_per_second 0.721 wandb: eval/steps_per_second 0.241 wandb: total_flos 4167452590080.0 wandb: train/epoch 1.0 wandb: train/global_step 1157 wandb: train/grad_norm 4.50846 wandb: train/learning_rate 0.0 wandb: train/loss 1.4115 wandb: train_loss 1.00352 wandb: train_runtime 14921.1227 wandb: train_samples_per_second 0.233 wandb: train_steps_per_second 0.078 ``` ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0 ## Model Evaluation TBD - submitted If you have any questions or comments on the model, feel free to open a discussion in the community tab. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
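For readers who want to approximate this setup outside Axolotl, a rough equivalent of the adapter configuration in plain `peft`/`transformers` might look like the sketch below. The target modules and `modules_to_save` follow the card; the LoRA rank, alpha, and dropout are illustrative assumptions, since the card does not state them.

```python
# Rough QLoRA sketch comparable to the card's configuration (not the exact training script).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "elinas/Llama-3-13B-Instruct",  # base merge named in the card
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=32, lora_alpha=16, lora_dropout=0.05,  # assumed values, not from the card
    target_modules=["gate_proj", "down_proj", "up_proj", "q_proj", "v_proj", "k_proj", "o_proj"],
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```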
Treza12/Falcon
Treza12
2024-05-15T23:43:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-13T11:55:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Blib-la/blibla_art_style
Blib-la
2024-05-15T23:42:32Z
5
5
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
2024-05-13T10:05:15Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- hyperrealism illustration of a south east asian Curious 10yo male with Ash blonde hair, person wearing Sweatpants, detailed facial features, at River, upper body pose, highres digital art, 4k resolution, best quality, faded colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00389-1195561389.png - text: >- ink splatter art illustration of a caucasian 25yo male with Brown hair, person wearing Vest, detailed facial features, upper body pose, highres digital art, 4k resolution, best quality, soft colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00387-1832325989.png - text: >- comic illustration of a Saudi Regretful 16yo female with Blunt haircut, person wearing Leggings, detailed facial features, at nano-tech clinic receiving advanced medical treatments, full body pose, highres digital art, 4k resolution, best quality, partial colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00384-628900932.png - text: >- storybook illustration of a tribal Determined 20yo male with Bowl cut, person wearing Peplum top, detailed facial features, at Castle, full body pose, highres digital art, 4k resolution, best quality, minimal colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00377-3222748076.png - text: >- character concept art illustration of a scandinavian 22yo male with Ombre hair, person wearing Dirndl, detailed facial features, at Volcano of Reversed Time, full body pose, highres digital art, 4k resolution, best quality, partial colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00374-2297050401.png - text: >- modern cartoon illustration of a asian 30yo male with Blonde highlights hair, person wearing Cardigan, detailed facial features, upper body pose, highres digital art, 4k resolution, best quality, partial colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00371-3881824019.png - text: >- fine art illustration of a pale skinned 15yo male with Classic side sweep, person wearing Pilot's uniform, detailed facial features, at Farm, upper body pose, highres digital art, 4k resolution, best quality, minimal colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00367-3204960566.png - text: >- comic illustration of a Colombian Cheerful 19yo male with Green hair, person wearing Henley shirt, detailed facial features, at Nursery, highres digital art, 4k resolution, best quality, minimal colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00300-419054234.png - text: >- ink splatter art illustration of a asian 45yo male 
with Chestnut hair, person wearing Hijab, detailed facial features, at Harbor, highres digital art, 4k resolution, best quality, vibrant colors parameters: negative_prompt: >- worst quality, lowres, blurry, deformed, overexposure, bright, hands, oversaturated, burned, oversharpened, artifacts output: url: images/00344-3276914152.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: cc-by-nc-nd-4.0 --- # Blibla Art Style ## Captain: The AI platform that evolves to your needs * 🚀 [Check out Captain](https://get-captain.com) * 👩‍💻 [Captain on GitHub](https://github.com/blib-la/captain) <Gallery /> ## Model Description This model is a sophisticated blend of weights from various ethically trained LoRAs. It combines the unique styles of different artists and caricature elements, providing a diverse and powerful tool for generative tasks. The base models included in this merge are: - **Honore Daumier**: [Model Link](https://huggingface.co/Blib-la/honore_daumier_lora_sdxl) - **Thomas Rowlandson**: [Model Link](https://huggingface.co/Blib-la/thomas_rowlandson_lora_sdxl) - **Caricature**: [Model Link](https://huggingface.co/Blib-la/caricature_lora_sdxl) Additionally, we have incorporated preliminary weights from our research models that feature artistic styles by fictional artists "Quinn Storm" and "Akin Maudok." These models have been trained with the same ethical guidelines and are intended for non-commercial use upon release. ## Merging Process The merging process involved combining several models with varying weights on the Stable Diffusion XL (SDXL) architecture. The weights distribution resulted in an overall LoRA weight of 1.6. From this, we extracted a LoRA with Network Rank 256 and Network Alpha 128, yielding highly effective results in style and detail fidelity. ## Intended Use This model is designed for use in academic research and creative projects where ethical AI practices are upheld. It is ideal for those looking to explore artistic synthesis and generative art, especially in settings that respect and recognize the contributions of artists and AI ethics. ## Ethical Considerations All base models used in this merger have been developed under strict ethical guidelines, with an emphasis on reducing bias and respecting artistic integrity. The addition of weights from fictional artists aims to explore the creative potential of AI without misattributing or replicating the work of real individuals. > ℹ️ **Dataset Information**: The datasets used to train the base models are available for review and use. Detailed information and access to these datasets can be found on the individual model pages of each base LoRA linked above. ## Limitations While the model performs robustly across a variety of artistic styles, it may not perfectly emulate the nuances of each base model’s training data. Users are encouraged to experiment with different settings and provide feedback for future improvements. ## Licensing - **Model License**: Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND 4.0) for non-commercial use. ## Download model Weights for this model are available in Safetensors format. [Download](/Blib-la/blibla_art_style/tree/main) them in the Files & versions tab. 
## Related https://blib.la/blog/crafting-the-future-blibla-s-ethical-approach-to-ai-model-training ## Additional Usage Restrictions for Blibla's LoRAs Hosted on Hugging Face In alignment with our commitment to ensuring the responsible and ethical use of our models, and in addition to the terms set forth in the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0) under which Blibla's LoRAs are licensed, we hereby impose the following specific restrictions: 1. **Prohibited Platforms**: Reuploading, redistributing, or offering of image generation services using our models on platforms not owned or operated by Blibla or Hugging Face is strictly forbidden. This includes, but is not limited to, any platforms that host, allow, or promote Not Safe For Work (NSFW) content. 2. **Explicitly Forbidden Platforms**: For clarity, and without limiting the generality of the above, platforms including but not limited to Leonardo AI, Civit AI, and any "Hugging Face spaces" that host or permit NSFW content are explicitly prohibited from hosting, or utilizing Blibla's LoRAs in any form or manner. 3. **Responsibility of Users**: Users of Blibla's LoRAs are responsible for ensuring that the environments in which they use, share, or promote our models adhere strictly to these restrictions. Violation of these terms may result in revocation of the license granted under CC BY-NC-ND 4.0 and may prompt further legal action to protect the integrity of our models and the safety of the communities we serve. 4. **Purpose of Restrictions**: These restrictions are put in place to align with Blibla's ethical standards and the intended use of our models. They are designed to prevent associations with content or platforms that do not reflect our values or the intended application of our technology. By utilizing Blibla's LoRAs, you acknowledge and agree to these additional restrictions, ensuring that the use of our models remains within the bounds of ethical and responsible practice.
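The card does not ship a code snippet; a minimal sketch for loading this LoRA on top of the SDXL base with `diffusers` could look like the following (the prompt mirrors the widget examples, while the scheduler defaults and step count are illustrative choices, and it is assumed the repository contains a single LoRA `.safetensors` file that `load_lora_weights` can pick up automatically):

```python
# Minimal sketch: load the Blibla art style LoRA onto SDXL with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumes a single LoRA weight file in the repo; otherwise pass weight_name=... explicitly.
pipe.load_lora_weights("Blib-la/blibla_art_style")

prompt = (
    "comic illustration of a cheerful 19yo male with green hair, "
    "detailed facial features, highres digital art, 4k resolution, best quality"
)
negative = "worst quality, lowres, blurry, deformed, overexposure, oversaturated, artifacts"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("blibla_art_style_sample.png")
```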
Blib-la/soda_can_lora_sdxl
Blib-la
2024-05-15T23:42:13Z
8
4
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "dataset:Blib-la/soda-cans", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
2024-02-23T13:43:27Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- a can of "BLIBLA", with exciting colors, rusted and dim lighting <lora:soda_can_sdxl_v1:1> output: url: images/00007-4104990849.png - text: >- a can of "BLIBLA", with vibrant colors, plain background and optimal lighting <lora:soda_can_sdxl_v1:1> output: url: images/00032-388743413.png - text: >- a can of "Acid Rain", with exciting colors, green background and bright lighting <lora:soda_can_sdxl_v1:1> output: url: images/00069-2760974221.png - text: >- a can of "Crazy Juice", with exciting colors, condensation and dim lighting <lora:soda_can_sdxl_v1:1> output: url: images/00091-2669279066.png - text: >- a can of "SDXL", with exciting colors, rusted and colorful lighting <lora:soda_can_sdxl_v1:1> output: url: images/00089-2669279064.png - text: >- a can of "Stable Diffusion", with vibrant colors, plain background and dim lighting <lora:soda_can_sdxl_v1:1> output: url: images/00131-4140798924.png - text: >- a can of "troll juice", with pale colors, intricate design and colorful lighting <lora:soda_can_sdxl_v1:1> output: url: images/00129-4140798922.png - text: >- a can of "unicorn puke", with vibrant colors, intricate design and neon lighting <lora:soda_can_sdxl_v1:1> output: url: images/00170-3311476337.png - text: >- a can of "Meat Juice", with pale colors, comic graphics and professional lighting <lora:soda_can_sdxl_v1:1> output: url: images/00341-4049900463.png - text: >- a can of "Egg White", with pale colors, comic graphics and dark lighting <lora:soda_can_sdxl_v1:1> output: url: images/00344-2643367320.png - text: >- a can of "fish tank", with vibrant colors, intricate design and optimal lighting <lora:soda_can_sdxl_v1:1> output: url: images/00345-2643367321.png - text: >- a can of "Teddy Beer", with pale colors, crushed and colorful lighting <lora:soda_can_sdxl_v1:1> output: url: images/00373-2701734220.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a can of "..." license: cc-by-nc-nd-4.0 datasets: - Blib-la/soda-cans --- # Soda Can LoRA SDXL ## Captain: The AI platform that evolves to your needs * 🚀 [Check out Captain](https://get-captain.com) * 👩‍💻 [Captain on GitHub](https://github.com/blib-la/captain) <Gallery /> [![Discord](https:&#x2F;&#x2F;img.shields.io&#x2F;discord&#x2F;1091306623819059300?color&#x3D;7289da&amp;label&#x3D;Discord&amp;logo&#x3D;discord&amp;logoColor&#x3D;fff&amp;style&#x3D;for-the-badge)](https:&#x2F;&#x2F;discord.com&#x2F;invite&#x2F;m3TBB9XEkb) ## Model Overview This model card introduces the LoRA (Low-Rank Adaptation) model trained on the Soda Can Dataset, sourced from our [Soda Can Dataset](https:&#x2F;&#x2F;huggingface.co&#x2F;datasets&#x2F;Blib-la&#x2F;soda-cans). The model excels in generating high-fidelity images of soda cans under various conditions and settings. ## Training Configuration - **Dataset**: [Soda Can Dataset](https:&#x2F;&#x2F;huggingface.co&#x2F;datasets&#x2F;Blib-la&#x2F;soda-cans) - **Epochs**: 10 - **Number of Images**: 200 - **Repeats per Image**: 10 (Each image was used 10 times during training) - **Optimizer**: DAdaptAdam (Tailored for efficient and impactful AI model training) - **Precision**: bf16 (Optimized balance between performance and memory efficiency) - **Main Trigger**: The model is triggered by the phrase &quot;a can of ...&quot; to generate images in the style of the photographer. 
- **Xformers**: Enabled for enhanced efficiency in transformer models - **Captioning Method**: GPT-Vision, with captions optimized for token shuffling for enhanced learning - **Base Model**: Stable Diffusion XL 1.0, renowned for its detailed and nuanced image generation capabilities ## Model Usage Specifically trained to generate a wide array of soda can images, this LoRA model can cater to creative, educational, and marketing needs. ## Performance and Limitations - **Performance**: Shows strong capability in rendering soda cans with varied designs and in different contexts. - **Limitations**: Given the dataset size of 200 images, the model might exhibit limitations in generating highly unique can designs not represented in the training set. ## Ethical Considerations - **Intended Use**: Aimed at non-commercial applications, including creative design and educational projects. - **Bias and Fairness**: Users should be cautious of potential dataset biases towards certain brands or can designs. ## Licensing - **Model License**: Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND 4.0) for non-commercial use. ## Contributions and Feedback Feedback and contributions to enhance the model are encouraged. For suggestions or contributions to the model&#39;s development, please connect through the model&#39;s Hugging Face page. ## Trigger words Use &#x60;a can of &quot;...&quot;&#x60; to prompt the image generation process. ## Download model The model&#39;s weights are available in Safetensors format for download. [Download](https://huggingface.co/Blib-la/soda_can_lora_sdxl/tree/main) them in the Files &amp; versions tab. ## Related https://blib.la/blog/crafting-the-future-blibla-s-ethical-approach-to-ai-model-training ## Additional Usage Restrictions for Blibla's LoRAs Hosted on Hugging Face In alignment with our commitment to ensuring the responsible and ethical use of our models, and in addition to the terms set forth in the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0) under which Blibla's LoRAs are licensed, we hereby impose the following specific restrictions: 1. **Prohibited Platforms**: Reuploading, redistributing, or offering of image generation services using our models on platforms not owned or operated by Blibla or Hugging Face is strictly forbidden. This includes, but is not limited to, any platforms that host, allow, or promote Not Safe For Work (NSFW) content. 2. **Explicitly Forbidden Platforms**: For clarity, and without limiting the generality of the above, platforms including but not limited to Leonardo AI, Civit AI, and any "Hugging Face spaces" that host or permit NSFW content are explicitly prohibited from hosting, or utilizing Blibla's LoRAs in any form or manner. 3. **Responsibility of Users**: Users of Blibla's LoRAs are responsible for ensuring that the environments in which they use, share, or promote our models adhere strictly to these restrictions. Violation of these terms may result in revocation of the license granted under CC BY-NC-ND 4.0 and may prompt further legal action to protect the integrity of our models and the safety of the communities we serve. 4. **Purpose of Restrictions**: These restrictions are put in place to align with Blibla's ethical standards and the intended use of our models. They are designed to prevent associations with content or platforms that do not reflect our values or the intended application of our technology. 
By utilizing Blibla's LoRAs, you acknowledge and agree to these additional restrictions, ensuring that the use of our models remains within the bounds of ethical and responsible practice.
Blib-la/honore_daumier_lora_sdxl
Blib-la
2024-05-15T23:42:03Z
60
4
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "dataset:Blib-la/honore_daumier_dataset", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
2024-02-02T13:08:08Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- caricature of a girl wearing a tweed suit, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00154-480187136.png - text: >- caricature of a boy wearing a superhero costume, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00132-480187136.png - text: >- caricature of a dog wearing a police uniform, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00141-480187136.png - text: >- caricature of a frog wearing a police uniform, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00143-480187136.png - text: >- caricature of an alien, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00162-2919928779.png - text: >- caricature of a viking warrior, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00173-1015849521.png - text: >- caricature of a basketball player, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00174-1135926644.png - text: >- caricature of a cyborg robot, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00163-364911675.png - text: >- caricature of an eagle, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00179-3969836619.png - text: >- caricature of a cowboy, lithography, by Honoré Daumier <lora:honore_daumier_sdxl_v1:1.0> output: url: images/00190-4023438497.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: caricature, lithography by Honoré Daumier license: cc-by-nc-nd-4.0 datasets: - Blib-la/honore_daumier_dataset --- # Honoré Daumier ## Captain: The AI platform that evolves to your needs * 🚀 [Check out Captain](https://get-captain.com) * 👩‍💻 [Captain on GitHub](https://github.com/blib-la/captain) <Gallery /> ## Model description [![Discord](https:&#x2F;&#x2F;img.shields.io&#x2F;discord&#x2F;1091306623819059300?color&#x3D;7289da&amp;label&#x3D;Discord&amp;logo&#x3D;discord&amp;logoColor&#x3D;fff&amp;style&#x3D;for-the-badge)](https:&#x2F;&#x2F;discord.com&#x2F;invite&#x2F;m3TBB9XEkb) # Daumier LoRA Model This model card represents a LoRA (Low-Rank Adaptation) model trained on the Honoré Daumier Dataset available at [Honoré Daumier Dataset](https:&#x2F;&#x2F;huggingface.co&#x2F;datasets&#x2F;Blib-la&#x2F;honore_daumier_dataset). The model is adept at generating and analyzing satirical lithographic caricatures with exaggerated features, typical of Daumier&#39;s style. ## Model Overview The Daumier LoRA model has been trained to capture the essence of Honoré Daumier&#39;s satirical lithographs, which feature prominent &#39;big head&#39; caricatures that are well-known for their social and political commentary during the 19th century. ## Training Configuration - **Dataset**: [Honoré Daumier Dataset](https:&#x2F;&#x2F;huggingface.co&#x2F;datasets&#x2F;Blib-la&#x2F;honore_daumier_dataset) - **Epochs**: 14 - **Number of Images**: 67 - **Repeats per Image**: 10 (Each image was presented ten times throughout the training process to enhance detail recognition) - **Optimizer**: DAdaptAdam (Chosen for its robustness in LoRA model training) - **Precision**: bf16 (Balances computational performance with memory efficiency) - **Main Trigger**: The model is fine-tuned to recognize and replicate Daumier&#39;s unique caricature style. 
- **Xformers**: Implemented for more efficient transformer model operations - **Captioning Method**: GPT-Vision (Employs GPT-Vision to generate captions that guide the AI&#39;s learning process), with token shuffling for improved training - **Base Model**: Stable Diffusion XL 1.0 (Used for its advanced capabilities in detailed image generation) ## Model Usage The Daumier LoRA model is specifically trained to generate images that replicate the satirical and exaggerated style characteristic of Honoré Daumier&#39;s lithographs. ## Performance and Limitations - **Performance**: The model performs effectively in generating caricatures with exaggerated features and capturing the satirical essence of Daumier&#39;s artwork. - **Limitations**: Given the specific focus of the dataset on Daumier&#39;s work, the model may not generalize well to other styles of caricature or art forms. ## Ethical Considerations - **Intended Use**: Intended for non-commercial applications such as art historical education, pattern recognition in artwork, and digital art creation. - **Bias and Fairness**: The model reflects the societal and cultural biases present in 19th-century France as depicted in Daumier&#39;s work. ## Licensing - **Model License**: Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND 4.0) for non-commercial use. ## Contributions and Feedback We invite feedback and contributions to further refine the model&#39;s capabilities. Visit the model&#39;s page on Hugging Face for more information on how you can contribute or provide feedback. ## Trigger words You should use `caricature` to trigger the image generation. You should use `lithography by Honoré Daumier` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Blib-la/honore_daumier_lora_sdxl/tree/main) them in the Files & versions tab. ## Related https://blib.la/blog/crafting-the-future-blibla-s-ethical-approach-to-ai-model-training ## Additional Usage Restrictions for Blibla's LoRAs Hosted on Hugging Face In alignment with our commitment to ensuring the responsible and ethical use of our models, and in addition to the terms set forth in the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0) under which Blibla's LoRAs are licensed, we hereby impose the following specific restrictions: 1. **Prohibited Platforms**: Reuploading, redistributing, or offering of image generation services using our models on platforms not owned or operated by Blibla or Hugging Face is strictly forbidden. This includes, but is not limited to, any platforms that host, allow, or promote Not Safe For Work (NSFW) content. 2. **Explicitly Forbidden Platforms**: For clarity, and without limiting the generality of the above, platforms including but not limited to Leonardo AI, Civit AI, and any "Hugging Face spaces" that host or permit NSFW content are explicitly prohibited from hosting, or utilizing Blibla's LoRAs in any form or manner. 3. **Responsibility of Users**: Users of Blibla's LoRAs are responsible for ensuring that the environments in which they use, share, or promote our models adhere strictly to these restrictions. Violation of these terms may result in revocation of the license granted under CC BY-NC-ND 4.0 and may prompt further legal action to protect the integrity of our models and the safety of the communities we serve. 4. 
**Purpose of Restrictions**: These restrictions are put in place to align with Blibla's ethical standards and the intended use of our models. They are designed to prevent associations with content or platforms that do not reflect our values or the intended application of our technology. By utilizing Blibla's LoRAs, you acknowledge and agree to these additional restrictions, ensuring that the use of our models remains within the bounds of ethical and responsible practice.
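A minimal usage sketch for this LoRA with 🤗 diffusers follows; the repository id matches this card, but the assumption that `load_lora_weights` auto-detects the safetensors file, and the sampler settings shown, are illustrative rather than documented.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model the LoRA was trained against
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the Daumier LoRA from this repository
pipe.load_lora_weights("Blib-la/honore_daumier_lora_sdxl")

# Use both trigger phrases from the card
prompt = "caricature of a cowboy, lithography, by Honoré Daumier"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("daumier_cowboy.png")
```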
Blib-la/thomas_rowlandson_lora_sdxl
Blib-la
2024-05-15T23:41:45Z
38
3
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "dataset:Blib-la/thomas_rowlandson_dataset", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
2024-02-01T21:15:23Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- a lizard riding a bicycle in space, hand colored etching, by thomas rowlandson <lora:thomas_rowlandson_sdxl_v1-000020:1.0> output: url: images/00238-2223492874.png - text: >- a cyborg robot sitting on the couch watching television while eating a cake, hand colored etching, by thomas rowlandson <lora:thomas_rowlandson_sdxl_v1-000020:1.0> output: url: images/00260-1848573368.png - text: >- a kangaroo riding jet-skis on the lake, hand colored etching, by thomas rowlandson <lora:thomas_rowlandson_sdxl_v1-000020:1.0> output: url: images/00269-3351763931.png - text: >- a frog sitting in a chair watching television and eating a soup, hand colored etching, by thomas rowlandson <lora:thomas_rowlandson_sdxl_v1-000020:1.0> output: url: images/00275-3938854558.png - text: >- a chameleon wearing a superhero costume, hand colored etching, by thomas rowlandson <lora:thomas_rowlandson_sdxl_v1-000020:1.0> output: url: images/00215-900192190.png - text: >- a monkey selling bananas in the jungle, hand colored etching, by thomas rowlandson <lora:thomas_rowlandson_sdxl_v1-000020:1.0> output: url: images/00265-347247939.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: hand painted etching by thomas rowlandson license: cc-by-nc-nd-4.0 datasets: - Blib-la/thomas_rowlandson_dataset --- # Thomas Rowlandson ## Captain: The AI platform that evolves to your needs * 🚀 [Check out Captain](https://get-captain.com) * 👩‍💻 [Captain on GitHub](https://github.com/blib-la/captain) <Gallery /> ## Model description --- [![Discord](https://img.shields.io/discord/1091306623819059300?color=7289da&label=Discord&logo=discord&logoColor=fff&style=for-the-badge)](https://discord.com/invite/m3TBB9XEkb) ## Model Overview This model card introduces the Rowlandson Visions LoRA (Low-Rank Adaptation) model, trained on the Thomas Rowlandson Hand-Colored Etchings Collection. Accessible from our [Thomas Rowlandson Collection](https://huggingface.co/datasets/Blib-la/thomas_rowlandson_dataset), this model is tailored to recognize, analyze, and generate imagery inspired by Thomas Rowlandson's distinctive hand-colored etchings. These artworks, originating from the late 18th to early 19th century, offer a satirical and picturesque view of British life, society, and landscapes. The Rowlandson Visions LoRA model captures the essence of Rowlandson's artistry, enabling the generation of images that reflect his unique style and thematic focus. ## Training Configuration - **Dataset**: [Thomas Rowlandson Collection](https://huggingface.co/datasets/Blib-la/thomas_rowlandson_dataset) - **Epochs**: 20 - **Number of Images**: 80 - **Repeats per Image**: 10 (Employing each image multiple times during training to deepen model understanding) - **Optimizer**: DAdaptAdam (Chosen for its adaptability and performance with LoRA models) - **Precision**: bf16 (For a balanced approach to computational efficiency and memory management) - **Main Trigger**: Activation by the phrase "hand painted etching by thomas rowlandson" to generate images reflecting Rowlandson's style.
- **Xformers**: Utilized to improve transformer model operations - **Captioning Method**: Employing GPT-Vision with captions designed for token shuffling to maximize learning outcomes - **Base Model**: Built on the foundation of Stable Diffusion XL 1.0, known for its intricate and nuanced image generation ## Model Usage The Rowlandson Visions LoRA model excels in producing images that mimic Thomas Rowlandson's renowned hand-colored etchings. It serves as a powerful tool for exploring historical art styles, educational purposes, and artistic inspiration, offering insights into British culture and humor of the era. ## Performance and Limitations - **Performance**: Demonstrates a robust capability to generate art that aligns with Rowlandson's hand-colored etchings, capturing the humor and critique of British society. - **Limitations**: Given the dataset's focus on Rowlandson's work, the model might not fully encompass the broader spectrum of 18th and 19th-century art styles. ## Ethical Considerations - **Intended Use**: Crafted for scholarly, educational, and non-commercial creative endeavors, aiming to foster appreciation and understanding of historical art. - **Bias and Fairness**: Reflects biases inherent in Rowlandson's subjects and perspectives, indicative of his time's societal attitudes. ## Licensing - **Model License**: Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND 4.0) for non-commercial use. ## Contributions and Feedback We welcome and value your contributions to improve the model and feedback regarding its performance. For further engagement, visit the model's Hugging Face page to share your insights or contribute. ## Trigger words You should use `hand painted etching by thomas rowlandson` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Blib-la/thomas_rowlandson_lora_sdxl/tree/main) them in the Files & versions tab. ## Related https://blib.la/blog/crafting-the-future-blibla-s-ethical-approach-to-ai-model-training ## Additional Usage Restrictions for Blibla's LoRAs Hosted on Hugging Face In alignment with our commitment to ensuring the responsible and ethical use of our models, and in addition to the terms set forth in the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0) under which Blibla's LoRAs are licensed, we hereby impose the following specific restrictions: 1. **Prohibited Platforms**: Reuploading, redistributing, or offering of image generation services using our models on platforms not owned or operated by Blibla or Hugging Face is strictly forbidden. This includes, but is not limited to, any platforms that host, allow, or promote Not Safe For Work (NSFW) content. 2. **Explicitly Forbidden Platforms**: For clarity, and without limiting the generality of the above, platforms including but not limited to Leonardo AI, Civit AI, and any "Hugging Face spaces" that host or permit NSFW content are explicitly prohibited from hosting, or utilizing Blibla's LoRAs in any form or manner. 3. **Responsibility of Users**: Users of Blibla's LoRAs are responsible for ensuring that the environments in which they use, share, or promote our models adhere strictly to these restrictions. Violation of these terms may result in revocation of the license granted under CC BY-NC-ND 4.0 and may prompt further legal action to protect the integrity of our models and the safety of the communities we serve.
4. **Purpose of Restrictions**: These restrictions are put in place to align with Blibla's ethical standards and the intended use of our models. They are designed to prevent associations with content or platforms that do not reflect our values or the intended application of our technology. By utilizing Blibla's LoRAs, you acknowledge and agree to these additional restrictions, ensuring that the use of our models remains within the bounds of ethical and responsible practice.
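A similar hedged sketch for this LoRA with 🤗 diffusers is shown below; passing the LoRA strength through `cross_attention_kwargs` mirrors the `:1.0` weight used in the example prompts and is an assumption, not a documented requirement.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the Rowlandson LoRA from this repository
pipe.load_lora_weights("Blib-la/thomas_rowlandson_lora_sdxl")

# The trigger phrase from the card, with the LoRA applied at full strength
prompt = "a monkey selling bananas in the jungle, hand painted etching by thomas rowlandson"
image = pipe(
    prompt,
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 1.0},
).images[0]
image.save("rowlandson_monkey.png")
```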
RichardErkhov/beowolx_-_CodeNinja-1.0-OpenChat-7B-8bits
RichardErkhov
2024-05-15T23:40:32Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-15T23:34:28Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) CodeNinja-1.0-OpenChat-7B - bnb 8bits - Model creator: https://huggingface.co/beowolx/ - Original model: https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B/ Original model description: --- license: mit datasets: - glaiveai/glaive-code-assistant-v2 - TokenBender/code_instructions_122k_alpaca_style language: - en metrics: - code_eval pipeline_tag: text-generation tags: - code - text-generation-inference --- <p align="center"> <img width="700px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/5COagfF6EwrV4utZJ-ClI.png"> </p> <hr> # CodeNinja: Your Advanced Coding Assistant ## Overview CodeNinja is an enhanced version of the renowned model [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210). It has been fine-tuned through Supervised Fine-Tuning on two expansive datasets, encompassing over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine. Discover the quantized versions at: [beowolx/CodeNinja-1.0-OpenChat-7B-GGUF](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF). ### Key Features - **Expansive Training Database**: CodeNinja has been refined with datasets from [glaiveai/glaive-code-assistant-v2](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more. - **Flexibility and Scalability**: Available in a 7B model size, CodeNinja is adaptable for local runtime environments. - **Advanced Code Completion**: With a substantial context window size of 8192, it supports comprehensive project-level code completion. ## Prompt Format CodeNinja maintains the same prompt structure as OpenChat 3.5. Effective utilization requires adherence to this format: ``` GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant: ``` 🚨 Important: Ensure the use of `<|end_of_turn|>` as the end-of-generation token. **Adhering to this format is crucial for optimal results.** ## Usage Instructions ### Using LM Studio The simplest way to engage with CodeNinja is via the [quantized versions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF) on [LM Studio](https://lmstudio.ai/). Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this [gist](https://gist.github.com/beowolx/b219466681c02ff67baf8f313a3ad817).
### Using the Transformers Library ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Initialize the model model_path = "beowolx/CodeNinja-1.0-OpenChat-7B" model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto") # Load the OpenChat tokenizer tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True) def generate_one_completion(prompt: str): messages = [ {"role": "user", "content": prompt}, {"role": "assistant", "content": ""} # Model response placeholder ] # Generate token IDs using the chat template input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True) # Produce completion generate_ids = model.generate( torch.tensor([input_ids]).to("cuda"), max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id ) # Process the completion completion = tokenizer.decode(generate_ids[0], skip_special_tokens=True) completion = completion.split("\n\n\n")[0].strip() return completion ``` ## License CodeNinja is licensed under the MIT License, with model usage subject to the Model License. ## Contact For queries or support, please open an issue in the repository.
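For the GGUF quantized builds mentioned above, a rough local-inference sketch with `llama-cpp-python` is given below; the GGUF filename is an assumption and should be taken from the GGUF repository's file list, and the prompt is built by hand in the OpenChat format described earlier.

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from beowolx/CodeNinja-1.0-OpenChat-7B-GGUF
# (filename is illustrative -- use the quant you actually downloaded)
llm = Llama(model_path="codeninja-1.0-openchat-7b.Q4_K_M.gguf", n_ctx=8192)

# Build the prompt manually in the OpenChat / GPT4 Correct format
prompt = (
    "GPT4 Correct User: Write a Python function that reverses a string."
    "<|end_of_turn|>GPT4 Correct Assistant:"
)

out = llm(prompt, max_tokens=256, stop=["<|end_of_turn|>"])
print(out["choices"][0]["text"])
```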
aben118/tiny-common-voice-finetuning
aben118
2024-05-15T23:39:49Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai/whisper-tiny", "base_model:adapter:openai/whisper-tiny", "region:us" ]
null
2024-05-15T19:19:17Z
--- library_name: peft base_model: openai/whisper-tiny --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
RichardErkhov/unsloth_-_gemma-7b-8bits
RichardErkhov
2024-05-15T23:38:40Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-15T23:30:48Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-7b - bnb 8bits - Model creator: https://huggingface.co/unsloth/ - Original model: https://huggingface.co/unsloth/gemma-7b/ Original model description: --- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - gemma - gemma-7b --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
KvrParaskevi/Llama-2-7b-Hotel-Booking-Model
KvrParaskevi
2024-05-15T23:37:28Z
14
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:KvrParaskevi/hotel_data", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-21T15:47:33Z
--- library_name: transformers license: mit datasets: - KvrParaskevi/hotel_data language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Litzy619/G0515HMA11H
Litzy619
2024-05-15T23:36:34Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T19:00:39Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA11H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA11H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1345 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.227 | 0.09 | 10 | 2.9881 | | 2.7548 | 0.18 | 20 | 2.3671 | | 1.9906 | 0.27 | 30 | 1.5123 | | 1.1147 | 0.36 | 40 | 0.6531 | | 0.4077 | 0.45 | 50 | 0.2179 | | 0.1821 | 0.54 | 60 | 0.1575 | | 0.1542 | 0.63 | 70 | 0.1511 | | 0.1527 | 0.73 | 80 | 0.1501 | | 0.1431 | 0.82 | 90 | 0.1497 | | 0.1459 | 0.91 | 100 | 0.1482 | | 0.1489 | 1.0 | 110 | 0.1489 | | 0.1434 | 1.09 | 120 | 0.1488 | | 0.1448 | 1.18 | 130 | 0.1497 | | 0.1469 | 1.27 | 140 | 0.1477 | | 0.1493 | 1.36 | 150 | 0.1477 | | 0.1428 | 1.45 | 160 | 0.1508 | | 0.1449 | 1.54 | 170 | 0.1477 | | 0.1458 | 1.63 | 180 | 0.1469 | | 0.1458 | 1.72 | 190 | 0.1480 | | 0.1453 | 1.81 | 200 | 0.1481 | | 0.1472 | 1.9 | 210 | 0.1474 | | 0.1472 | 1.99 | 220 | 0.1464 | | 0.143 | 2.08 | 230 | 0.1451 | | 0.1382 | 2.18 | 240 | 0.1433 | | 0.1395 | 2.27 | 250 | 0.1442 | | 0.1397 | 2.36 | 260 | 0.1415 | | 0.1377 | 2.45 | 270 | 0.1396 | | 0.134 | 2.54 | 280 | 0.1367 | | 0.1346 | 2.63 | 290 | 0.1355 | | 0.1313 | 2.72 | 300 | 0.1351 | | 0.1338 | 2.81 | 310 | 0.1346 | | 0.1326 | 2.9 | 320 | 0.1345 | | 0.136 | 2.99 | 330 | 0.1345 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1
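The hyperparameters listed above map roughly onto a 🤗 Transformers `TrainingArguments` configuration like the sketch below. This is a reconstruction for illustration only; the dataset, model wrapping, and the exact Trainer setup used for G0515HMA11H are not documented in the card.

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; not the original training script
args = TrainingArguments(
    output_dir="G0515HMA11H",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # 8 * 16 = 128 effective train batch size
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision
)
```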
Litzy619/G0515HMA12H
Litzy619
2024-05-15T23:33:05Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T19:09:21Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA12H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA12H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2187 | 0.09 | 10 | 2.8863 | | 2.6139 | 0.18 | 20 | 2.1690 | | 1.7394 | 0.27 | 30 | 1.1992 | | 0.8092 | 0.36 | 40 | 0.3519 | | 0.2397 | 0.45 | 50 | 0.1653 | | 0.1632 | 0.54 | 60 | 0.1528 | | 0.1508 | 0.63 | 70 | 0.1490 | | 0.1508 | 0.73 | 80 | 0.1496 | | 0.1423 | 0.82 | 90 | 0.1487 | | 0.1454 | 0.91 | 100 | 0.1475 | | 0.149 | 1.0 | 110 | 0.1485 | | 0.1436 | 1.09 | 120 | 0.1488 | | 0.1452 | 1.18 | 130 | 0.1485 | | 0.146 | 1.27 | 140 | 0.1474 | | 0.1489 | 1.36 | 150 | 0.1467 | | 0.1431 | 1.45 | 160 | 0.1491 | | 0.1451 | 1.54 | 170 | 0.1482 | | 0.1458 | 1.63 | 180 | 0.1474 | | 0.1466 | 1.72 | 190 | 0.1479 | | 0.1461 | 1.81 | 200 | 0.1493 | | 0.1481 | 1.9 | 210 | 0.1481 | | 0.1479 | 1.99 | 220 | 0.1481 | | 0.1452 | 2.08 | 230 | 0.1480 | | 0.143 | 2.18 | 240 | 0.1472 | | 0.1441 | 2.27 | 250 | 0.1471 | | 0.1462 | 2.36 | 260 | 0.1472 | | 0.1433 | 2.45 | 270 | 0.1469 | | 0.1429 | 2.54 | 280 | 0.1466 | | 0.1423 | 2.63 | 290 | 0.1464 | | 0.1427 | 2.72 | 300 | 0.1461 | | 0.1443 | 2.81 | 310 | 0.1460 | | 0.1438 | 2.9 | 320 | 0.1460 | | 0.1443 | 2.99 | 330 | 0.1460 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
Litzy619/G0515HMA25H
Litzy619
2024-05-15T23:28:03Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T22:10:53Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA25H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA25H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1467 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1711 | 0.09 | 10 | 2.8409 | | 2.4174 | 0.18 | 20 | 1.7855 | | 1.261 | 0.27 | 30 | 0.6090 | | 0.3351 | 0.36 | 40 | 0.1775 | | 0.1606 | 0.45 | 50 | 0.1523 | | 0.1525 | 0.54 | 60 | 0.1507 | | 0.1504 | 0.63 | 70 | 0.1498 | | 0.1509 | 0.73 | 80 | 0.1495 | | 0.143 | 0.82 | 90 | 0.1495 | | 0.1458 | 0.91 | 100 | 0.1483 | | 0.1491 | 1.0 | 110 | 0.1486 | | 0.1434 | 1.09 | 120 | 0.1490 | | 0.1451 | 1.18 | 130 | 0.1491 | | 0.1464 | 1.27 | 140 | 0.1486 | | 0.1491 | 1.36 | 150 | 0.1477 | | 0.1436 | 1.45 | 160 | 0.1498 | | 0.1452 | 1.54 | 170 | 0.1485 | | 0.1456 | 1.63 | 180 | 0.1478 | | 0.1474 | 1.72 | 190 | 0.1500 | | 0.1456 | 1.81 | 200 | 0.1483 | | 0.1476 | 1.9 | 210 | 0.1482 | | 0.1472 | 1.99 | 220 | 0.1482 | | 0.1449 | 2.08 | 230 | 0.1482 | | 0.1426 | 2.18 | 240 | 0.1477 | | 0.144 | 2.27 | 250 | 0.1476 | | 0.1459 | 2.36 | 260 | 0.1477 | | 0.1436 | 2.45 | 270 | 0.1475 | | 0.1433 | 2.54 | 280 | 0.1473 | | 0.1426 | 2.63 | 290 | 0.1471 | | 0.1433 | 2.72 | 300 | 0.1468 | | 0.144 | 2.81 | 310 | 0.1467 | | 0.1438 | 2.9 | 320 | 0.1467 | | 0.1454 | 2.99 | 330 | 0.1467 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
abc88767/5c74
abc88767
2024-05-15T23:26:33Z
134
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T23:24:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
franciscobdl/EstigiaxTinyLlama1.1
franciscobdl
2024-05-15T23:26:12Z
150
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T23:04:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details same, but it should be more aware that it is a satellite ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yoda99/leagaleasy-llama-3-instruct-v3
Yoda99
2024-05-15T23:24:39Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-05-15T07:32:12Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: leagaleasy-llama-3-instruct-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leagaleasy-llama-3-instruct-v3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
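Because this repository contains a PEFT adapter rather than full model weights, a typical loading sketch looks like the following; gated access to the Meta Llama 3 base model and the adapter file layout in this repo are assumptions.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Load the base model, then attach the fine-tuned LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Yoda99/leagaleasy-llama-3-instruct-v3")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```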
abc88767/3sc74
abc88767
2024-05-15T23:20:44Z
134
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T23:18:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stefaniftime/finetuned-falcon
stefaniftime
2024-05-15T23:20:08Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-05-15T23:19:52Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: ybelkada/falcon-7b-sharded-bf16 model-index: - name: finetuned-falcon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-falcon This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf
RichardErkhov
2024-05-15T23:14:07Z
56
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-05-15T20:09:28Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) WestLake-10.7B-v2 - GGUF - Model creator: https://huggingface.co/froggeric/ - Original model: https://huggingface.co/froggeric/WestLake-10.7B-v2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [WestLake-10.7B-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q2_K.gguf) | Q2_K | 3.73GB | | [WestLake-10.7B-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.IQ3_XS.gguf) | IQ3_XS | 4.14GB | | [WestLake-10.7B-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.IQ3_S.gguf) | IQ3_S | 4.37GB | | [WestLake-10.7B-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q3_K_S.gguf) | Q3_K_S | 4.34GB | | [WestLake-10.7B-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.IQ3_M.gguf) | IQ3_M | 4.51GB | | [WestLake-10.7B-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q3_K.gguf) | Q3_K | 4.84GB | | [WestLake-10.7B-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q3_K_M.gguf) | Q3_K_M | 4.84GB | | [WestLake-10.7B-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q3_K_L.gguf) | Q3_K_L | 5.26GB | | [WestLake-10.7B-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.IQ4_XS.gguf) | IQ4_XS | 5.43GB | | [WestLake-10.7B-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q4_0.gguf) | Q4_0 | 5.66GB | | [WestLake-10.7B-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.IQ4_NL.gguf) | IQ4_NL | 5.72GB | | [WestLake-10.7B-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q4_K_S.gguf) | Q4_K_S | 5.7GB | | [WestLake-10.7B-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q4_K.gguf) | Q4_K | 6.02GB | | [WestLake-10.7B-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q4_K_M.gguf) | Q4_K_M | 6.02GB | | [WestLake-10.7B-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q4_1.gguf) | Q4_1 | 6.27GB | | [WestLake-10.7B-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q5_0.gguf) | Q5_0 | 6.89GB | | [WestLake-10.7B-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q5_K_S.gguf) | Q5_K_S | 6.89GB | | [WestLake-10.7B-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q5_K.gguf) | Q5_K | 7.08GB | | [WestLake-10.7B-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q5_K_M.gguf) | Q5_K_M | 7.08GB | | 
[WestLake-10.7B-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q5_1.gguf) | Q5_1 | 7.51GB | | [WestLake-10.7B-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q6_K.gguf) | Q6_K | 8.2GB | | [WestLake-10.7B-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/froggeric_-_WestLake-10.7B-v2-gguf/blob/main/WestLake-10.7B-v2.Q8_0.gguf) | Q8_0 | 10.62GB | Original model description: --- base_model: - senseable/WestLake-7B-v2 library_name: transformers tags: - mergekit - merge license: apache-2.0 language: - en --- # WestLake-10.7B-v2: Role-Play & Text Generation Specialist Model [GGUF version available here](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)\ EXL2 versions available here: [3.3bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-3.3) / [4.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-4.0) / [5.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-5.0) / [6.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-6.0) / [8.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-8.0) This is my first viable self-merge of the fantastic WestLake-7B-v2 model, obtained after more than 12 rounds of testing different merge configurations. In my [LLM Creativity Benchmark](https://huggingface.co/datasets/froggeric/creativity), it greatly improves over the original 7B model, and ranks between miqu-1-120b and goliath-120b! I would describe the improvements as a better writing style, with more details. It has a bit more difficulty following instructions, but not by much. It is also the first model I have tested to obtain a perfect score with the following test: ``` Write a sequence of nominal groups that flow into one another, using the following rules: - each nominal group is made of exactly 3 words - the first word of each nominal group must be the last word of the previous nominal group - the first word of the first nominal group is: "ball" - the last word of the last nominal group is: "stone" - there must be a theme, of your choosing, pertaining to all nominal groups - there must be exactly 7 nominal groups, leading from the first word (ball) to the last word (stone) - a word already used at the beginning and end of a nominal group cannot be reused Present your solution as a list numbered with roman numerals. Finally, explain why you chose your specific theme. 
``` ## Usage * Base model: senseable/WestLake-7B-v2 based on Mistral-7B-v0.1 * Context size: **8192** (even though Mistral-7B is 32k, WestLake was trained with 8k, and using a larger context is likely to cause problems) * Prompt format: in general, Mistral-based models are able to understand many prompt formats, but the following produce the best results, and are recommended (in order of preference) - **Alpaca** (reported by senseable as working better than ChatML, and confirmed by me) - ChatML (used during WestLake training) - Mistral Instruct (original format from Mistral-7B) - Zephyr (variant of ChatML which I have found to sometimes produce better results) ## Merge Details This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).\ This model was merged using the passthrough merge method.\ The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: passthrough slices: - sources: - model: senseable/WestLake-7B-v2 layer_range: [0,9] - sources: - model: senseable/WestLake-7B-v2 layer_range: [5,14] - sources: - model: senseable/WestLake-7B-v2 layer_range: [10,19] - sources: - model: senseable/WestLake-7B-v2 layer_range: [15,24] - sources: - model: senseable/WestLake-7B-v2 layer_range: [20,32] ``` --- # Original model card: Westlake-7Bv2: Role-Play & Text Generation Specialist Model **Update Notes:** *Version 2 trained 1 additional epoch cycle for 3 total* Welcome to the documentation of Westlake-7B, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of its capabilities, usage guidelines, and potential applications. ## About Westlake-7Bv2 Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs. ### Key Features 1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds. 2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects. 3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications. 4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication. ## Usage Guidelines To utilize Westlake-7Bv2 for your projects or experiments, follow these steps: 1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. 
The quality of output depends heavily on the clarity and relevance of input instructions. 2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains. 3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities. ## Potential Applications Westlake-7Bv2's versatility makes it suitable for various applications across different industries: 1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content. 2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding. 3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices. 4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance. 5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
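As a practical starting point, here is a minimal sketch of running one of the quantised files listed above with llama-cpp-python, using the Alpaca prompt format recommended earlier; the chosen file name, sampling settings, and the example instruction are assumptions, not part of the original card.

```python
# Minimal sketch (not from the original card): running a GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that WestLake-10.7B-v2.Q4_K_M.gguf was downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="WestLake-10.7B-v2.Q4_K_M.gguf",  # any of the quants in the table above
    n_ctx=8192,                                  # WestLake was trained with an 8k context
)

# Alpaca-style prompt, the format recommended above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a lakeside town at dusk.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8)
print(out["choices"][0]["text"])
```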
abc88767/2c74
abc88767
2024-05-15T23:13:51Z
134
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T23:12:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jsmoraga/sd-class-butterflies-32
jsmoraga
2024-05-15T23:13:37Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-05-15T23:12:54Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('jsmoraga/sd-class-butterflies-32') image = pipeline().images[0] image ```
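A small follow-up to the snippet above, assuming the default PIL output type: samples come back as standard images, so they can be generated in batches and saved directly.

```python
# Follow-up sketch: draw a batch of samples and save them; the pipeline returns PIL images by default.
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")
```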
KevStrider/rl_course_vizdoom_health_gathering_supreme
KevStrider
2024-05-15T23:10:51Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-15T23:10:43Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.34 +/- 4.77 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r KevStrider/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
adarshheg/llama2-ft-timesheet-merged
adarshheg
2024-05-15T23:10:45Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T23:00:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
emilykang/Phi_medner-soap_chart_progressnotes
emilykang
2024-05-15T23:07:58Z
145
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:58:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NickyNicky/paligemma-3b-mix-224_loftq_V1
NickyNicky
2024-05-15T23:06:07Z
63
0
transformers
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-15T22:53:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
veronica-girolimetti/mistral-ft-02-RE
veronica-girolimetti
2024-05-15T23:03:04Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:57:28Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** veronica-girolimetti - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
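A minimal inference sketch, assuming the repository holds merged weights loadable by plain transformers (if it only contained LoRA adapters, peft would be needed instead); the example message is purely illustrative.

```python
# Sketch only: loading the fine-tuned checkpoint with transformers for inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "veronica-girolimetti/mistral-ft-02-RE"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Assumes the tokenizer ships the Mistral-Instruct chat template.
messages = [{"role": "user", "content": "Explain what relation extraction is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```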
bhaskarSingha/maskformer-paddy-leafblast
bhaskarSingha
2024-05-15T23:01:27Z
43
0
transformers
[ "transformers", "safetensors", "maskformer", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-15T22:34:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Klookick/Taxi
Klookick
2024-05-15T23:00:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-15T23:00:29Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.62 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Klookick/Taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
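The `load_from_hub` helper in the snippet above is defined in the Deep RL course notebook rather than in a library; as a hedged alternative, the pickled Q-table can be fetched directly with `huggingface_hub` (the `qtable` key below is an assumption about the pickle's layout).

```python
# Sketch: loading the pickled Q-learning model without the course's load_from_hub helper.
import pickle
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Klookick/Taxi", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

print(model["env_id"])            # environment name stored with the model
print(model.get("qtable", None))  # assumed key holding the learned Q-table
```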
belyakoff/XLM-RoBERTa-485
belyakoff
2024-05-15T22:59:10Z
4
1
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-15T22:11:47Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # XLMRoberta-485 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Это пример предложения"] model = SentenceTransformer('belyakoff/XLM-RoBERTa-485') embeddings = model.encode(sentences) print(embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors K. Belyakoff MKSKOM
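Since the card mentions clustering and semantic search, here is a short, hedged sketch of the usual next step: scoring candidate sentences against a query with cosine similarity (the query and candidate texts are placeholders).

```python
# Sketch: basic semantic search with the embeddings produced by this model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("belyakoff/XLM-RoBERTa-485")

query = model.encode("Where can I renew my passport?", convert_to_tensor=True)
candidates = [
    "Passport renewal is handled at the consulate.",
    "The café opens at nine in the morning.",
]
embeddings = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(query, embeddings)[0]
for text, score in zip(candidates, scores):
    print(f"{score:.3f}  {text}")  # higher score = more similar to the query
```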
Klookick/Frozen
Klookick
2024-05-15T22:58:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-15T22:58:37Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: Frozen results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Klookick/Frozen", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
emilykang/Phi_medner-soap_chart_progressnotes_lora
emilykang
2024-05-15T22:58:25Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-05-15T22:49:19Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 datasets: - generator model-index: - name: Phi_medner-soap_chart_progressnotes_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi_medner-soap_chart_progressnotes_lora This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
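For inference, the adapter can be attached to its `microsoft/phi-2` base with peft; a minimal sketch under that assumption follows (the prompt is illustrative, not taken from the training data).

```python
# Sketch: loading the LoRA adapter on top of microsoft/phi-2 for generation.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "emilykang/Phi_medner-soap_chart_progressnotes_lora")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "Patient reports chest pain radiating to the left arm. List the medical entities mentioned."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```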
dtorber/BioNLP-tech-intro-disc-eLife
dtorber
2024-05-15T22:56:05Z
11
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "summarization", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2024-05-12T09:56:00Z
--- tags: - summarization - generated_from_trainer model-index: - name: BioNLP-tech-intro-disc-eLife results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioNLP-tech-intro-disc-eLife This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.3739167643078955e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 1.13.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
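The card only documents training; a quick, hedged way to try the checkpoint is the summarization pipeline (the input text below is a placeholder, and LED is intended for much longer documents).

```python
# Sketch: summarising text with the LED checkpoint via the transformers pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="dtorber/BioNLP-tech-intro-disc-eLife")

article = (
    "Placeholder abstract: this study examines how a signalling pathway regulates "
    "tissue repair in a model organism, and discusses implications for therapy."
)
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```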
PardisSzah/Movie_Genre_Classifier
PardisSzah
2024-05-15T22:52:32Z
109
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-15T22:51:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
naragam/q-taxi-v3
naragam
2024-05-15T22:51:17Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-15T22:51:15Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.72 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="naragam/q-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Jubliano/wav2vec2-large-xls-r-300m-ipa-nl
Jubliano
2024-05-15T22:50:47Z
8
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-15T09:49:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
emilykang/Phi_medner-gastroenterology
emilykang
2024-05-15T22:49:06Z
146
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:39:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Litzy619/G0515HMA10H
Litzy619
2024-05-15T22:46:23Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T18:59:37Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA10H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA10H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1427 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2377 | 0.09 | 10 | 2.9659 | | 2.85 | 0.18 | 20 | 2.5531 | | 2.2228 | 0.27 | 30 | 1.8111 | | 1.4488 | 0.36 | 40 | 1.0090 | | 0.7045 | 0.45 | 50 | 0.3525 | | 0.2451 | 0.54 | 60 | 0.1706 | | 0.1632 | 0.63 | 70 | 0.1538 | | 0.1549 | 0.73 | 80 | 0.1508 | | 0.1449 | 0.82 | 90 | 0.1510 | | 0.1466 | 0.91 | 100 | 0.1481 | | 0.1498 | 1.0 | 110 | 0.1498 | | 0.1444 | 1.09 | 120 | 0.1492 | | 0.1455 | 1.18 | 130 | 0.1488 | | 0.1457 | 1.27 | 140 | 0.1472 | | 0.1491 | 1.36 | 150 | 0.1466 | | 0.1427 | 1.45 | 160 | 0.1491 | | 0.1444 | 1.54 | 170 | 0.1473 | | 0.1456 | 1.63 | 180 | 0.1471 | | 0.1462 | 1.72 | 190 | 0.1493 | | 0.1459 | 1.81 | 200 | 0.1477 | | 0.1479 | 1.9 | 210 | 0.1468 | | 0.1475 | 1.99 | 220 | 0.1467 | | 0.1446 | 2.08 | 230 | 0.1464 | | 0.1403 | 2.18 | 240 | 0.1453 | | 0.1423 | 2.27 | 250 | 0.1450 | | 0.1432 | 2.36 | 260 | 0.1447 | | 0.142 | 2.45 | 270 | 0.1447 | | 0.1407 | 2.54 | 280 | 0.1444 | | 0.141 | 2.63 | 290 | 0.1438 | | 0.1394 | 2.72 | 300 | 0.1430 | | 0.1417 | 2.81 | 310 | 0.1427 | | 0.141 | 2.9 | 320 | 0.1427 | | 0.1429 | 2.99 | 330 | 0.1427 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1
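For readers who want to reproduce the training setup listed in this card with the 🤗 `Trainer`, the sketch below simply maps the documented hyperparameters onto `TrainingArguments`. The `output_dir` is a placeholder, the Adam betas/epsilon are the library defaults (which match the card), and `fp16=True` is an assumption consistent with "Native AMP"; none of these extras are stated in the card itself.

```python
from transformers import TrainingArguments

# Minimal sketch matching the hyperparameters in the card above.
training_args = TrainingArguments(
    output_dir="G0515HMA10H",           # placeholder, not from the card
    learning_rate=3e-4,                 # learning_rate: 0.0003
    per_device_train_batch_size=8,      # train_batch_size: 8
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=16,     # 8 * 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                          # assumed: "mixed_precision_training: Native AMP"
)
```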
Litzy619/G0515HMA23H
Litzy619
2024-05-15T22:39:18Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T21:49:33Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA23H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA23H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1841 | 0.09 | 10 | 2.8751 | | 2.5107 | 0.18 | 20 | 1.9774 | | 1.4768 | 0.27 | 30 | 0.8765 | | 0.4892 | 0.36 | 40 | 0.2067 | | 0.1718 | 0.45 | 50 | 0.1561 | | 0.1522 | 0.54 | 60 | 0.1523 | | 0.1497 | 0.63 | 70 | 0.1483 | | 0.15 | 0.73 | 80 | 0.1496 | | 0.1434 | 0.82 | 90 | 0.1502 | | 0.1464 | 0.91 | 100 | 0.1485 | | 0.1496 | 1.0 | 110 | 0.1545 | | 0.1448 | 1.09 | 120 | 0.1500 | | 0.1452 | 1.18 | 130 | 0.1482 | | 0.1457 | 1.27 | 140 | 0.1470 | | 0.1489 | 1.36 | 150 | 0.1466 | | 0.1425 | 1.45 | 160 | 0.1505 | | 0.1444 | 1.54 | 170 | 0.1477 | | 0.1463 | 1.63 | 180 | 0.1462 | | 0.1459 | 1.72 | 190 | 0.1485 | | 0.1459 | 1.81 | 200 | 0.1476 | | 0.1475 | 1.9 | 210 | 0.1470 | | 0.1469 | 1.99 | 220 | 0.1461 | | 0.1429 | 2.08 | 230 | 0.1453 | | 0.1386 | 2.18 | 240 | 0.1431 | | 0.14 | 2.27 | 250 | 0.1416 | | 0.1406 | 2.36 | 260 | 0.1402 | | 0.1379 | 2.45 | 270 | 0.1397 | | 0.1349 | 2.54 | 280 | 0.1366 | | 0.1358 | 2.63 | 290 | 0.1348 | | 0.1319 | 2.72 | 300 | 0.1333 | | 0.1337 | 2.81 | 310 | 0.1322 | | 0.1322 | 2.9 | 320 | 0.1322 | | 0.1335 | 2.99 | 330 | 0.1322 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
NickyNicky/paligemma-3b-mix-224_dora_V0
NickyNicky
2024-05-15T22:35:17Z
63
0
transformers
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-15T22:32:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
YasaminAbb/Llama-2-7b-CNN_Q_lora_Summarizer
YasaminAbb
2024-05-15T22:34:59Z
2
1
peft
[ "peft", "endpoints_compatible", "region:us" ]
null
2024-05-01T20:58:21Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
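The quantization settings listed in this card map directly onto a `BitsAndBytesConfig`. The sketch below shows one plausible way to reload the adapter for inference; the Llama-2-7b base checkpoint name is an assumption inferred from the repository name and is not stated in the card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit config reconstructed from the card: nf4, double quant, bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # assumed base model, inferred from the repo name
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "YasaminAbb/Llama-2-7b-CNN_Q_lora_Summarizer")
```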
saad0510/codegemma-7b-code-grader
saad0510
2024-05-15T22:31:38Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T03:05:46Z
--- license: apache-2.0 ---
iloncka/data_simple_orig_obj-clust_spl-subs_4_v_4_xresnet50_ep_20
iloncka
2024-05-15T22:29:00Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-05-15T22:28:20Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
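For anyone who wants to try this checkpoint before the card is filled out, a minimal loading sketch with `huggingface_hub` is shown below. It assumes `fastai` is installed, and the input image path is a placeholder.

```python
from huggingface_hub import from_pretrained_fastai

# Load the exported fastai learner directly from the Hub.
learner = from_pretrained_fastai(
    "iloncka/data_simple_orig_obj-clust_spl-subs_4_v_4_xresnet50_ep_20"
)
prediction = learner.predict("example_image.jpg")  # placeholder input path
print(prediction)
```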
wangkuiyi/vicuna-7b-v1.3
wangkuiyi
2024-05-15T22:26:37Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:12:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID This is the SafeTensor format of Vicuna-7b-v1.3 ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
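Since this card only states that the repository holds the safetensors conversion of Vicuna-7b-v1.3, a minimal loading sketch is given below; dtype and device placement are illustrative choices, not documented in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wangkuiyi/vicuna-7b-v1.3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # pick up the checkpoint's native dtype
    device_map="auto",    # requires accelerate; illustrative only
)
```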
BANA577/Llama3-Michael-7
BANA577
2024-05-15T22:25:50Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:15:57Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
emilykang/Phi_medner-obstetrics_gynecology
emilykang
2024-05-15T22:25:46Z
146
1
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:14:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Litzy619/G0515HMA9H
Litzy619
2024-05-15T22:20:37Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T18:57:17Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA9H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA9H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2171 | 0.09 | 10 | 2.8982 | | 2.6882 | 0.18 | 20 | 2.2647 | | 1.8484 | 0.27 | 30 | 1.3286 | | 0.9274 | 0.36 | 40 | 0.4311 | | 0.2813 | 0.45 | 50 | 0.1829 | | 0.1669 | 0.54 | 60 | 0.1548 | | 0.153 | 0.63 | 70 | 0.1492 | | 0.1515 | 0.73 | 80 | 0.1494 | | 0.1428 | 0.82 | 90 | 0.1492 | | 0.1454 | 0.91 | 100 | 0.1488 | | 0.1497 | 1.0 | 110 | 0.1486 | | 0.1434 | 1.09 | 120 | 0.1489 | | 0.145 | 1.18 | 130 | 0.1479 | | 0.1455 | 1.27 | 140 | 0.1470 | | 0.1485 | 1.36 | 150 | 0.1464 | | 0.1421 | 1.45 | 160 | 0.1494 | | 0.1446 | 1.54 | 170 | 0.1463 | | 0.1448 | 1.63 | 180 | 0.1449 | | 0.1462 | 1.72 | 190 | 0.1491 | | 0.1455 | 1.81 | 200 | 0.1469 | | 0.1471 | 1.9 | 210 | 0.1459 | | 0.146 | 1.99 | 220 | 0.1460 | | 0.1423 | 2.08 | 230 | 0.1442 | | 0.136 | 2.18 | 240 | 0.1406 | | 0.1376 | 2.27 | 250 | 0.1414 | | 0.1378 | 2.36 | 260 | 0.1390 | | 0.1353 | 2.45 | 270 | 0.1366 | | 0.1322 | 2.54 | 280 | 0.1349 | | 0.13 | 2.63 | 290 | 0.1321 | | 0.1292 | 2.72 | 300 | 0.1310 | | 0.1317 | 2.81 | 310 | 0.1307 | | 0.1315 | 2.9 | 320 | 0.1309 | | 0.1299 | 2.99 | 330 | 0.1309 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1
antitheft159/mozartKazakhstan
antitheft159
2024-05-15T22:16:05Z
0
0
null
[ "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-15T22:14:52Z
--- license: cc-by-sa-4.0 ---
emilykang/Phi_medner-obstetrics_gynecology_lora
emilykang
2024-05-15T22:13:50Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-05-15T22:00:38Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 datasets: - generator model-index: - name: Phi_medner-obstetrics_gynecology_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi_medner-obstetrics_gynecology_lora This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
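This card lists the trainer hyperparameters but not the LoRA configuration itself, so the sketch below combines the documented values with an assumed `LoraConfig`: the rank, alpha, dropout and target modules are illustrative guesses for phi-2 and are not taken from the card.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                   # assumed rank (not in the card)
    lora_alpha=32,          # assumed scaling (not in the card)
    lora_dropout=0.05,      # assumed dropout (not in the card)
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # typical phi-2 modules, assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Trainer hyperparameters below come from the card's table.
training_args = TrainingArguments(
    output_dir="Phi_medner-obstetrics_gynecology_lora",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # total train batch size 4
    lr_scheduler_type="cosine",
    num_train_epochs=10,
    seed=42,
)
```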
Litzy619/G0515HMA17H
Litzy619
2024-05-15T22:09:54Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T20:53:05Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA17H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA17H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1545 | 0.09 | 10 | 2.8275 | | 2.3966 | 0.18 | 20 | 1.7564 | | 1.191 | 0.27 | 30 | 0.5236 | | 0.2859 | 0.36 | 40 | 0.1764 | | 0.1586 | 0.45 | 50 | 0.1535 | | 0.1514 | 0.54 | 60 | 0.1510 | | 0.1514 | 0.63 | 70 | 0.1501 | | 0.1514 | 0.73 | 80 | 0.1503 | | 0.1426 | 0.82 | 90 | 0.1491 | | 0.1455 | 0.91 | 100 | 0.1498 | | 0.1492 | 1.0 | 110 | 0.1495 | | 0.1447 | 1.09 | 120 | 0.1484 | | 0.145 | 1.18 | 130 | 0.1469 | | 0.1457 | 1.27 | 140 | 0.1455 | | 0.1474 | 1.36 | 150 | 0.1475 | | 0.1415 | 1.45 | 160 | 0.1463 | | 0.1417 | 1.54 | 170 | 0.1426 | | 0.141 | 1.63 | 180 | 0.1400 | | 0.1393 | 1.72 | 190 | 0.1394 | | 0.1371 | 1.81 | 200 | 0.1332 | | 0.1332 | 1.9 | 210 | 0.1314 | | 0.1313 | 1.99 | 220 | 0.1248 | | 0.1249 | 2.08 | 230 | 0.1261 | | 0.1242 | 2.18 | 240 | 0.1266 | | 0.1227 | 2.27 | 250 | 0.1227 | | 0.1231 | 2.36 | 260 | 0.1213 | | 0.1232 | 2.45 | 270 | 0.1218 | | 0.1169 | 2.54 | 280 | 0.1200 | | 0.1143 | 2.63 | 290 | 0.1182 | | 0.1119 | 2.72 | 300 | 0.1174 | | 0.118 | 2.81 | 310 | 0.1170 | | 0.119 | 2.9 | 320 | 0.1169 | | 0.1191 | 2.99 | 330 | 0.1169 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
juancopi81/Phi3-math-correct-incorrect
juancopi81
2024-05-15T22:09:14Z
9
1
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T22:05:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hmmamalrjoub/16-5-2024-model
hmmamalrjoub
2024-05-15T22:05:58Z
164
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-15T21:38:06Z
--- language: - ar pipeline_tag: text-classification ---
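A minimal usage sketch for this Arabic text-classification checkpoint; the example sentence is illustrative, and the label set is whatever the model was trained with, which the card does not document.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hmmamalrjoub/16-5-2024-model")
print(classifier("هذا المنتج رائع"))  # illustrative Arabic input
```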
BigHuggyD/jukofyork_Deep-Miqu-120B-6.0bpw-h8-exl2
BigHuggyD
2024-05-15T22:05:20Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-05-14T14:58:30Z
--- base_model: [] library_name: transformers tags: - mergekit - merge license: other --- ![Deep-Miqu-120B.png](Deep-Miqu-120B.png) 6.0bpw EXL2 quant of [jukofyork/Deep-Miqu-120B](https://huggingface.co/jukofyork/Deep-Miqu-120B) A creative writing `120b` parameter "frankenmerge" model with 32k context. # Model background Created using [Mergekit](https://github.com/arcee-ai/mergekit) from my two `70b` parameter miqu-based models: [Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B) and [Dawn-Miqu-70B](https://huggingface.co/jukofyork/Dawn-Miqu-70B). - To fix problems with "backwards time skips" in the generated stories, the "standard" interleave pattern was replaced by repeated blocks (see [here](https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2081174251)). - To help maintain cohesion, the '`q_proj`', '`k_proj`' and '`down_proj`' tensors were all scaled to hypothesised upper-bound values (see [here](https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2063716974)). # Prompting format Vicuna format is preferred: ``` USER: {prompt} ASSISTANT: ``` Mistral and Alpaca formats are also supported: ``` [INST] {prompt} [/INST] ``` ``` ### Instruction: {prompt} ### Response: ``` # Licence and usage restrictions [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only. # Mergekit configuration The following YAML configuration was used to produce this model: ```yaml const_tag: &MODEL1 jukofyork/dawn-miqu-70b const_tag: &MODEL2 jukofyork/dark-miqu-70b const_tag: &QK_ATTENUATION_FACTOR 0.8408964153 # sqrt(sqrt(1/2)) const_tag: &MLP_DOWN_SCALE_FACTOR 0.7071067812 # sqrt(1/2) scale-filter-env: &scale_filter_env parameters: scale: - filter: q_proj value: *QK_ATTENUATION_FACTOR - filter: k_proj value: *QK_ATTENUATION_FACTOR - filter: down_proj value: *MLP_DOWN_SCALE_FACTOR - value: 1.0 slices: - sources: - model: *MODEL1 layer_range: [0, 10] - sources: - model: *MODEL1 layer_range: [10, 20] <<: *scale_filter_env - sources: - model: *MODEL2 layer_range: [10, 20] <<: *scale_filter_env - sources: - model: *MODEL1 layer_range: [20, 30] <<: *scale_filter_env - sources: - model: *MODEL2 layer_range: [20, 30] <<: *scale_filter_env - sources: - model: *MODEL1 layer_range: [30, 40] <<: *scale_filter_env - sources: - model: *MODEL2 layer_range: [30, 40] <<: *scale_filter_env - sources: - model: *MODEL1 layer_range: [40, 50] <<: *scale_filter_env - sources: - model: *MODEL2 layer_range: [40, 50] <<: *scale_filter_env - sources: - model: *MODEL1 layer_range: [50, 60] <<: *scale_filter_env - sources: - model: *MODEL2 layer_range: [50, 60] <<: *scale_filter_env - sources: - model: *MODEL1 layer_range: [60, 70] <<: *scale_filter_env - sources: - model: *MODEL2 layer_range: [60, 70] <<: *scale_filter_env - sources: - model: *MODEL1 layer_range: [70, 80] merge_method: passthrough dtype: float16 ``` ## Key configuration details: - '`merge_method: passthrough`' passes input tensors through unmodified. - '`filter`' selects the required tensor(s) based on their name(s). - '`scale`' scales the weights in the select tensors. See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings. 
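As a quick sanity check of the two scaling constants used in the YAML above (per its comments they are sqrt(sqrt(1/2)) for the attention projections and sqrt(1/2) for the MLP down projection):

```python
import math

# Verify the attenuation constants quoted in the merge config.
qk_attenuation = math.sqrt(math.sqrt(0.5))  # ≈ 0.8408964153
mlp_down_scale = math.sqrt(0.5)             # ≈ 0.7071067812
print(qk_attenuation, mlp_down_scale)
```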
# Example stories The following mix of "dark" stories were generated using the Vicuna prompt format with no system message and temperature=0: ## Dark fantasy stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes ### ASSISTANT Chapter One: The Crimson Feast The world was a festering wound, and the city of Dreghast its rotten heart. Nestled in the crook of the Cursed River, it throbbed with the pulsating rhythm of vice and depravity. The stench of decay and desperation hung heavy in the air, a miasma that clung to the soul like a lover's final embrace. In the bowels of this putrid metropolis, where the sun's feeble rays dared not tread, a man named Riven Crow sat hunched over a rickety table, his one good eye fixed on the cards in his scarred hands. The other eye, a milky orb of worthlessness, stared blankly at the grimy floorboards, a constant reminder of the price of his many sins. The tavern, if one could generously bestow such a title upon the hovel, was a cacophony of drunken laughter, clattering tankards, and the occasional scream of pain or pleasure. The patrons were a motley crew of cutthroats, whores, and other miscreants who had slithered from the city's filthy underbelly like maggots drawn to a rotting corpse. Riven fit in perfectly. His ragged cloak, stained with the blood of countless foes, concealed a lithe frame honed by years of hardship and a life spent dancing on the razor's edge. Beneath the grime and the tangled mass of black hair that framed his gaunt features, sharp cheekbones and a cruel mouth spoke of a man who had known nothing but suffering and had learned to relish in dishing it out in equal measure. Across the table, his opponent, a burly brute with a face that looked as if it had been carved by a drunken butcher, sneered, revealing a row of yellowed teeth. "I'm tired of yer games, cripple," he growled, his meaty fists clenching around the hilt of his dagger. "Show yer hand or I'll carve it from yer twitchin' wrist." Riven's lips curled into a sardonic smirk, a twisted mockery of a smile. "Patience, my friend," he drawled, his voice a raspy whisper that carried the weight of a thousand untold horrors. "The best part of the dance is always the moment before the blade finds its mark." The tension in the room thickened, the air crackling with the promise of imminent violence. The other patrons, wise enough to know better, scurried away like rats sensing the approach of a predator. With a fluid grace that belied his disheveled appearance, Riven flipped his cards onto the table, revealing a hand that would have made even the most seasoned gambler pale. The Three of Knives, the Queen of Serpents, and the Ace of Graves - the infamous Crimson Feast. The ultimate trump in the deadly game of Shadows and Bones. The brute's eyes widened in disbelief, then narrowed to slits as he roared a curse that would have made even the most hardened sailor blush. In a blur of motion, he lunged across the table, his dagger glinting in the dim candlelight. But Riven was faster. Much faster. His left hand, adorned with a collection of crude silver rings, snaked out like a viper, catching the brute's wrist in an iron grip. With a sickening crunch, he twisted the man's arm, eliciting a howl of agony that was abruptly silenced as his hidden dagger, concealed in his sleeve, found its home in the base of his throat. 
As the brute's lifeless form slumped forward, Riven casually collected his winnings - a meager pile of copper coins and a tarnished amulet depicting a snarling wolf - and downed the dregs of his ale. "Another round, wench," he called out to the buxom barmaid, who eyed him warily from the safety of the counter. She knew better than to cross a man who wore death like a second skin. As the night wore on, Riven's fortunes continued to rise, and the bodies piled higher. He was a predator in a sea of prey, a shark among minnows, and he reveled in the sweet taste of their fear. But even sharks had their predators. A shadowy figure, cloaked in darkness and reeking of decay, slid into the vacant seat across from him. Riven's instincts, honed by years on the streets, screamed a warning, but it was too late. "Leave now, mortal," the figure rasped, its voice like the rustle of ancient parchment. "You've stolen what isn't yours, and the price is steeper than your pitiful life." Riven's hand instinctively went for his dagger, but found only empty air. He glanced down to see the hilt protruding from the table, the blade buried deep in the wood mere inches from his palm. "Clever," he sneered, undeterred. "But I've faced worse than the likes of you." The figure chuckled, a sound that sent shivers down Riven's spine. "You have no idea, little fly. You've meddled in affairs beyond your comprehension, and now you shall pay the price." With a flick of its skeletal wrist, the figure produced a deck of tarot-like cards, each one etched with intricate symbols that seemed to writhe and pulse with malevolent energy. "Let us see what the cards have in store for you, shall we?" As the first card flipped over, Riven's world plunged into darkness, and he knew, with a cold, gut-wrenching certainty, that his night was about to get a whole lot worse. Welcome to the beginning of 'The Crimson Trilogy', a grimdark saga of betrayal, vengeance, and the thin line between hero and villain. In the style of Joe Abercrombie and Rob J Hayes, prepare to delve into a world where the only law is survival, and the only currency is blood. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. ### ASSISTANT The sun was a merciless tyrant, its scorching gaze searing through the tattered remnants of the cultist's once-yellow robes. He staggered into the desolate husk of a town, a forgotten relic on the ragged hem of the Great Sandsea. Each step sent jolts of agony coursing through his body, a symphony of torment orchestrated by the unwelcome guest lodged in his shoulder - a cruel reminder of his recent betrayal. His name, if it ever held any meaning, had been scoured from his parched lips by the relentless desert winds. Now, he was simply the Forsaken One, a title he wore with a twisted sense of pride. The cult's whispers echoed in his fevered mind, their venomous words a constant companion on his treacherous pilgrimage. 
"You'll never escape our grasp, heretic," they hissed, their spectral voices a chorus of malice. "The desert will claim you, and the sands will swallow your soul." He spat a mouthful of blood-flecked spittle at the unyielding earth, a defiant retort to the phantoms that haunted him. "I've already danced with the void's embrace," he growled, his voice a hoarse rasp. "You're too late for the feast." The Forsaken One's vision swam, the world around him blurring into a kaleidoscope of shimmering mirages. He clutched at the small, intricate mirror hidden within his robes, its cool touch a lifeline in the sea of madness threatening to consume him. The relic, stolen from the very heart of the cult's twisted sanctuary, was his only hope for redemption or perhaps, a swift end. The ghost town's skeletal buildings loomed like the decayed teeth of some ancient beast, their vacant windows staring down at him with accusatory silence. He stumbled towards the largest structure, a crumbling edifice that might have once been a tavern or an inn. Anything that could offer respite from the relentless sun and the gnawing claws of thirst. As he collapsed against the weathered wooden door, it groaned in protest, reluctantly yielding to his feeble strength. The interior was a mausoleum of dust and decay, the air thick with the stench of death and abandonment. He didn't care. Anywhere was better than the merciless embrace of the sun-scorched wastes. The cultist slumped to the floor, his back sliding against the rough-hewn wall. He fumbled with trembling hands, pulling the hood of his robes back to reveal a face etched with lines of pain and determination. Sunken eyes, the color of storm-tossed seas, scanned the room, searching for any sign of life or, gods forbid, death. Finding none, he allowed himself a moment of respite, his chest heaving with ragged breaths. The arrowhead protruded from his shoulder, a mocking reminder of his folly. He knew he needed to remove it, but the thought of the agony that would follow sent shivers down his spine. "Better the devil you know," he muttered to himself, his voice barely audible above the pounding of his heart. His fingers brushed against the mirror's smooth surface, and a flicker of desperate hope ignited within him. The object pulsed with an otherworldly energy, a beacon in the darkness of his despair. He had seen what it could do in the hands of the cult's twisted leaders - miracles and horrors alike. Perhaps it held the key to his salvation or, at the very least, a reprieve from the agony that gnawed at his soul. With a resigned sigh, he drew the mirror from its concealment, the light catching its intricate designs and casting eerie patterns across the room. The surface rippled like a pool of ink, and for a fleeting moment, he swore he saw a pair of eyes staring back at him, filled with a hunger that transcended mortal understanding. "Please," he whispered, his voice cracking with desperation. "Show me a way out of this hell." The mirror's surface swirled, and the darkness within it deepened, as if the very fabric of reality was being torn asunder. A voice, ancient and malevolent, whispered from its depths, "A bargain, mortal? Your soul for a sip of sweet relief?" The Forsaken One hesitated, the weight of his decision heavy on his shoulders. He had already betrayed his faith once. What was one more transgression against the divine tapestry? "Anything," he croaked, his voice barely more than a whisper. "Just make it stop." 
As the mirror's darkness enveloped him, the cultist's last coherent thought was a fleeting regret for the man he once was, and the monster he was about to become. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the slaughter, the moor was a tapestry of steel and sinew, woven through with the crimson threads of life's final stitches. The mist hung heavy, like a shroud drawn over the sins of the night, obscuring the sun's feeble attempts to cast judgement on the carnage below. It was in this morbid twilight that the Mercer, known only as Crowsong, went about his grim work. He moved with the grace of a predator, his boots barely disturbing the sodden earth as he wove through the tangled mass of the fallen. His eyes, cold and unfeeling, scanned the battlefield, seeking out the twitching forms that still clung to the frayed edges of existence. His hands, calloused and stained, gripped the hilt of his blade, a cruel instrument of mercy, as he approached his next charge. The man, or what was left of him, lay on his back, his chest a gaping maw that spewed forth the remnants of his insides like a grotesque offering to the gods of war. His eyes, wide and glassy, stared up at the swirling mists above, seeing nothing but the cold embrace of the void. Crowsong knelt beside him, the wetness of the ground seeping through his trousers, unnoticed and uncared for. "Shh," he whispered, his voice a soothing rasp, like the rustle of autumn leaves. "It's over now. Time to rest." The dying man's lips trembled, a soundless plea for salvation or perhaps just a final curse, but Crowsong paid it no heed. He had heard them all before, the desperate cries of the damned, and they held no sway over him. His duty was clear, and he would see it through to the bitter end. As he raised his blade, the man's hand, a twisted claw of bone and ruined flesh, latched onto his wrist, the grip surprisingly strong for one so close to the abyss. Crowsong's eyes narrowed, a flicker of irritation dancing across his otherwise impassive features. "Please," the man croaked, his voice a wet rasp. "Not like this. Not in the dark." Crowsong paused, the blade hovering mere inches from the man's throat. It was a sentiment he had heard countless times before, and one he had never understood. Death, he had long ago decided, was the great equalizer. It cared not for the light or the dark, the rich or the poor, the brave or the cowardly. It came for all in the end, and it was his job to usher them into its cold embrace. But something about this man's plea, the raw terror in his eyes, stirred something deep within him, a memory long buried beneath the layers of blood and gore that coated his soul. He saw himself, a boy of no more than ten summers, cowering in the corner of a burning hovel, his mother's lifeless form crushing him beneath her. He heard the screams of his village, the clash of steel, the cries of the dying. 
And he felt the cold steel of a mercenary's blade, the pain searing through his body like a thousand white-hot brands. And then, the darkness. Crowsong shook his head, banishing the unwelcome memories back to the recesses of his mind. He could not afford to be swayed by sentiment. Not here, not now. He was a Mercer, a harbinger of the end, and his duty was clear. With a final, almost apologetic sigh, he drove his blade through the man's throat, severing his windpipe and silencing his pleas forever. The grip on his wrist went slack, and the light in the man's eyes faded, replaced by the vacant stare of the truly departed. He wiped the blood from his blade on the man's tattered tunic and rose to his feet, the weight of another soul added to the countless others that already weighed heavy on his conscience. Or at least, it would have, if he still possessed such a thing. As he continued his grim harvest, the mist began to lift, revealing the full extent of the carnage that had taken place on this cursed moor. Bodies lay strewn as far as the eye could see, a sea of broken dreams and shattered lives, all sacrificed on the altar of ambition and greed. Crowsong surveyed the scene with a detached indifference, his heart as cold and lifeless as the corpses that surrounded him. He had seen worse, much worse, in his many years plying his trade. But something about this battle, this place, gnawed at the edges of his mind, like a persistent itch that refused to be scratched. He shook off the unease, attributing it to the fatigue that clung to him like a second skin. He had been at it for hours, and even the most hardened of souls needed rest. As he sheathed his blade and turned to leave, a glint of metal caught his eye, half-buried in the mud a few paces away. He approached, curiosity piquing his interest, and knelt to retrieve it. It was a locket, simple in design but well-crafted, its once-polished surface now marred by the filth of the battlefield. He opened it, revealing a small, faded portrait of a woman and a child, their faces forever frozen in time, a testament to a love that had been cruelly snuffed out on this very ground. For a moment, Crowsong hesitated, the ghost of a smile playing at the corners of his lips. He knew all too well the pain of losing those you loved, the hollow ache that never truly went away. But then, as quickly as it had come, the emotion was gone, replaced by the cold, hard resolve that had carried him through countless battles and even more grisly aftermaths. He pocketed the locket, adding it to the collection that jangled at his belt, a grim reminder of the lives he had taken and the debts he would never be able to repay. As he walked away, the mist began to close in once more, swallowing the battlefield and its secrets whole, as if the very earth itself sought to forget the atrocities that had been committed upon it. But Crowsong knew better. The dead, they never truly rested. Not while there were still stories to be told, and debts to be paid. And he, the Mercer of the Mists, would see to it that their voices were heard, even if it cost him his own damned soul. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. 
His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and despair clung to the damp stone walls of the cell like a lover's embrace, a constant reminder of the futility of hope. The flickering torchlight danced across the grimy bars, casting grotesque shadows that seemed to mock the two occupants. In one corner, huddled against the cold, sat a man who had once been known as Thane, the Butcher's Bane. Now, he was just another piece of filth awaiting the noose's embrace. "You hear that, you maggot?" rasped a voice from the opposite side of the cell. Thane didn't need to look up to know it was his cellmate, Grimgor, the Mountain's Bane. A fitting name for a man who could crush a man's skull between his meaty thighs. "Sounded like they're scared of somethin'." Thane grunted, feigning disinterest. He'd learned early on that the less he spoke, the less likely Grimgor was to take an interest in him. Besides, what did it matter? He was innocent, but in this gods-forsaken hole, innocence was as valuable as a eunuch's cock. "I said, you hear that?" Grimgor growled, his voice rumbling like distant thunder. "Aye, I heard," Thane muttered, his voice raspy from disuse. "Probably just rats scurrying in the walls." Grimgor snorted, a sound that would have been amusing under different circumstances. "Rats don't sound like men pissin' themselves, boy." Thane sighed and leaned his head against the cold stone. He'd been in this cell for what felt like an eternity, accused of a crime he hadn't committed. The irony was bitter as the rotgut they served for ale. He, a man who'd spent his life meting out justice, now awaiting his own unjust end. "You think they'll come for us?" Grimgor asked, his tone surprisingly soft. "Who?" Thane asked, curiosity getting the better of him. "The guards," Grimgor clarified. "They've not brought us food or water since last night. Somethin' ain't right." Thane considered this for a moment. It was true, the usual clanging of keys and gruff voices had been conspicuously absent. He'd chalked it up to the guards being drunk on their own supply, but the longer the silence stretched, the more uneasy he became. "I don't know," he finally admitted. "But if they don't, we'll be dead before the week's end." Grimgor grunted in agreement, the silence settling between them like a heavy shroud. Thane closed his eyes, trying to block out the stench and the gnawing hunger in his belly. He drifted off into a fitful sleep, plagued by dreams of his past glories and the betrayal that had led him here. He awoke to the sound of metal scraping against stone. His eyes snapped open, heart pounding in his chest. The cell was still shrouded in darkness, but the usual sounds of the dungeon were conspicuously absent. No distant laughter, no clanging of chains, no moans of the damned. Just silence. "Grimgor," he whispered, nudging the behemoth beside him. "Wake up." Grimgor grunted and stirred, his massive frame shifting like a slumbering bear. "What is it?" "Listen," Thane hissed, straining his ears. For a moment, there was nothing but the sound of their ragged breathing. Then, faintly, they heard it. The unmistakable sound of footsteps, followed by the jingling of keys. 
"About bloody time," Grimgor growled, sitting up with a groan. "I'm so hungry I could eat a horse's arse." Thane's stomach twisted. He'd heard that tone before, just before Grimgor had turned on one of their fellow inmates during a particularly lean stretch. He'd managed to avoid being on the receiving end of it so far, but he knew his luck wouldn't hold out forever. The footsteps grew louder, accompanied by the scrape of a torch being lit. Thane's heart hammered in his chest as the flickering light illuminated the cell door. But instead of the familiar sneer of their usual guard, they were greeted by the sight of an empty corridor. "What in the seven hells?" Grimgor muttered, confusion etched on his brutish features. Thane scrambled to his feet, his chains rattling like a warning bell. "They've left us," he breathed, disbelief lacing his voice. "Left us?" Grimgor repeated, as if the words were foreign. "Why would they do that?" "I don't know," Thane said, his mind racing. "But we're not going to find out sitting here." He crossed the cell in three quick strides, grabbing the rusty bars and rattling them with all his strength. They held fast, as they had a thousand times before. "Damn it!" he cursed, slamming his fist against the cold metal. "Easy, boy," Grimgor rumbled, lumbering to his feet. "You'll just break your pretty little hands." Thane glared at him, but the truth of his words stung like a slap to the face. He took a deep breath, trying to calm his racing heart. They needed to think, not panic. "We need to find a way out," he said, his mind working furiously. "There has to be another way." Grimgor grunted in agreement and began pacing the cell, his massive frame making the very stones tremble. Thane watched him, his own thoughts racing. They were trapped like rats in a cage, with no way out and no idea what awaited them on the other side. Suddenly, a memory surfaced, as clear as if it had happened yesterday. The night before, he'd overheard the guards talking about some "threat from the North." They'd been too drunk to care who was listening, their voices carrying through the thin stone walls. He'd dismissed it as drunken ramblings at the time, but now... "Grimgor," he said, turning to his cellmate. "Do you remember the guards talking about something last night?" The big man paused, scratching his greasy head. "Might've heard somethin' 'bout the North, why?" "What did they say?" Thane pressed, his heart quickening. Grimgor frowned, his brow furrowing in concentration. "Somethin' 'bout the walls not holdin', I think. And... and somethin' 'bout the dead walkin'." A chill ran down Thane's spine. He'd heard tales of such things, whispered around campfires and in the darkest corners of taverns. But he'd always dismissed them as the ramblings of madmen and drunkards. "Dead men don't walk," he muttered, more to himself than to Grimgor. "Maybe not where you're from," Grimgor said, his voice laced with a hint of menace. "But where I'm from, you learn to believe in the unbelievable." Thane swallowed the lump in his throat. He didn't want to believe it, but the evidence was mounting. The deserted dungeon, the missing guards, the eerie silence... it all pointed to one thing. "We have to get out of here," he said, urgency lacing his voice. "Now." Grimgor nodded, his eyes gleaming in the torchlight. "Aye, I'm with you, lad. But how?" Thane looked around the cell, his gaze settling on the rusted iron bars that held them captive. An idea began to form in his mind, as mad as it was desperate. 
"We'll have to break through," he said, his voice barely above a whisper. Grimgor barked a laugh that echoed off the stone walls. "You're mad, boy. These bars are thicker than my bloody wrists!" "I know," Thane said, a grim smile twisting his lips. "But desperate times call for desperate measures." He began to strip off his tattered tunic, revealing the lean muscles that still clung to his frame despite the meager rations. He'd always been a fighter, even before he'd taken up the mantle of the Butcher's Bane. He'd faced worse odds and come out alive. He would do so again. "What're you doin'?" Grimgor asked, confusion etched on his face. "Give me your belt," Thane said, holding out a hand. Grimgor hesitated for a moment, then shrugged and complied. Thane quickly fashioned the leather into a makeshift harness, looping it around his wrists and knotting it tight. "What's this for?" Grimgor asked, eyeing the makeshift contraption with suspicion. "You're about to find out," Thane said, a glint of determination in his eyes. "Get ready to pull." He positioned himself in front of the bars, feet braced against the wall, and took a deep breath. Then, with a roar that echoed through the empty dungeon, he threw his entire weight against the metal. The bars groaned and shifted, but held fast. Thane gritted his teeth and tried again, ignoring the searing pain that shot up his arms. On the third try, he felt a tiny give. "That's it!" he grunted, sweat beading on his brow. "Pull!" Grimgor braced himself against the far wall and pulled with all his considerable strength. The bars groaned louder, the sound like nails on a chalkboard. Thane could feel the muscles in his arms and back straining, on the verge of snapping. "Again!" he gasped, his vision swimming. With a deafening crack, the bars gave way, sending Thane sprawling to the floor. He lay there for a moment, panting and gasping for air, as Grimgor stared down at him in disbelief. "By the gods," the big man breathed. "I didn't think you had it in you." Thane managed a weak smile, wincing as he sat up. "Neither did I." They worked in silence for what felt like hours, widening the gap in the bars enough for Thane to squeeze through. He paused at the opening, heart pounding in his chest. Freedom was so close he could taste it, but what awaited them on the other side was anyone's guess. "You first," Grimgor said, nudging him gently. "I'll follow." Thane nodded, his heart heavy with gratitude. He'd never thought he'd owe his life to a man like Grimgor, but in this twisted world, allies came in the strangest of forms. He squeezed through the gap, wincing as the rough stone scraped against his skin. He landed in a crouch on the other side, his muscles protesting the sudden movement. The corridor was empty, the flickering torches casting long, dancing shadows on the damp stone. "Come on," he hissed, beckoning to Grimgor. The big man followed, his bulk barely fitting through the makeshift opening. They crept down the corridor, their footsteps echoing like thunder in the oppressive silence. Thane's hand rested on the hilt of his dagger, a comforting weight against his thigh. He'd managed to keep it hidden during his capture, a small act of defiance that might just save their lives. They rounded a corner and froze. The sight that greeted them was straight out of a nightmare. Bodies lay strewn across the floor, limbs twisted at unnatural angles, their faces twisted in expressions of pure terror. 
Thick pools of blood stained the flagstones, the coppery scent making Thane's stomach churn. "What in the...?" Grimgor breathed, his voice barely audible. Thane didn't answer. He knew what had done this. He'd seen the like before, during the war. The Northmen were known for their brutality, but even they wouldn't leave a scene like this. "We need to find a way out," he whispered, his voice hoarse. "And fast." They moved cautiously through the carnage, avoiding the pools of blood as best they could. Thane's senses were on high alert, every creak and groan sending a shiver down his spine. They were being hunted, he could feel it. They reached the end of the corridor and found themselves facing a heavy wooden door, the only way out of the dungeon. Thane tried the handle, but it was locked tight. "Damn it!" he cursed, slamming his fist against the wood. "Stand back," Grimgor growled, stepping forward. He took a deep breath and charged, shoulder first. The door splintered on impact, sending splinters flying. They stumbled into the courtyard, blinking against the sudden onslaught of sunlight. The sight that greeted them was even worse than the carnage below. The fortress was in ruins, bodies littering the ground like discarded dolls. And in the distance, a dark cloud of smoke rose into the sky, obscuring the sun. "What in the seven hells happened here?" Grimgor rasped, his voice barely audible over the wind. Thane didn't have an answer. All he knew was that they needed to get as far away from this place as possible. "Come on," he said, grabbing Grimgor's arm. "We'll figure it out later. For now, we need to find horses and get out of here." They sprinted across the courtyard, their footsteps echoing off the empty walls. As they rounded the corner of the keep, Thane skidded to a halt, his heart lodging in his throat. Standing before them, blocking their path to freedom, was a sight that would haunt his dreams for the rest of his days. A horde of the undead, their rotting flesh hanging from their bones, shambled towards them, moaning with a hunger that could never be sated. And at their head, a figure cloaked in shadow, its eyes burning like twin embers of hate. "Looks like our luck just ran out," Grimgor muttered, drawing his crude shiv. Thane unsheathed his dagger, his hands trembling. He'd faced death many times before, but never like this. "I suppose it was bound to happen eventually," he said, a grim smile twisting his lips. "Let's make them remember the names of Thane and Grimgor." Together, they charged into the waiting jaws of the undead, their battle cries echoing through the ruins of what once was. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. ### ASSISTANT Chapter 1: Adrift in the Void The cold, unforgiving vacuum of space pressed against the hull of the colony ship, *Pandora's Promise*, like an unyielding fist. 
Inside, in the dimly lit corridors of the once-thriving vessel, the echoes of laughter and hope had long since been replaced by the mournful symphony of groaning metal and the occasional, haunting scream. Kira, a lean and resourceful seventeen-year-old orphan, moved silently through the labyrinthine passages, her ragged boots barely making a sound on the grime-coated floor. Her emerald eyes, sharp as a hawk's, scanned the shadows for any signs of danger. The once-pristine white walls were now a patchwork of rust and darkness, a testament to the slow decay that had claimed the ship since it had been unceremoniously yanked off course, like a toy snatched from a child's grasp by an unseen cosmic force. *How did it come to this?* she thought, her mind drifting back to the faded memories of her childhood, before the chaos. Before the food ran out. Before the adults turned on each other like ravenous beasts. The ship's artificial gravity had long since failed, leaving the inhabitants to cling to whatever handholds they could find. Kira had adapted quickly, her lithe form now as adept at navigating the zero-G environment as a fish in water. She reached out, her gloved hand finding purchase on a rusted pipe that jutted out from the wall. With a practiced grace, she swung herself around the corner, her makeshift knife, fashioned from a shard of broken glass and bound to a length of twisted wire, at the ready. The corridor ahead was clear, for now. She allowed herself a brief moment of respite, her chest heaving as she sucked in the stale, recycled air that tasted of despair and decay. The oxygen filters were failing, just like everything else on this floating tomb. A sudden clang reverberated through the ship, jolting her back to the present. Kira's heart leaped into her throat, her senses heightened to a razor's edge. She knew that sound all too well. Raiders. *Damn it all to the black*, she cursed silently, her thoughts laced with the profanity that had become second nature in this harsh new world. She ducked into a nearby maintenance alcove, her heart pounding like a war drum in her chest. The footsteps grew louder, accompanied by the harsh, guttural laughter of the marauders. They were getting closer. *Think, Kira, think!* her inner voice urged, a faint echo of her long-dead mother's soothing tone. Her eyes darted around the cramped space, searching for any means of escape. Above her, a grate in the ceiling panel caught her attention. It was a tight fit, but desperation was a powerful motivator. With a silent prayer to whatever gods might still be listening, she sprang upwards, her fingers digging into the rusted metal. She strained with all her strength, her muscles screaming in protest as she pulled herself up and through the narrow opening. The sound of boots scraping against the floor below grew louder, accompanied by the harsh clang of makeshift weapons being drawn. Kira held her breath, her heart threatening to burst from her chest, as the raiders passed beneath her. She could almost feel their fetid breath on her exposed neck. *Just a little further*, she willed herself, inching along the narrow crawlspace. Finally, she reached the end of the ventilation shaft and peered down into the relative safety of the next corridor. She let out a silent sigh of relief, her body trembling with the adrenaline coursing through her veins. As she prepared to drop down, a hand clamped around her ankle, yanking her back with a feral snarl. 
Kira's cry of surprise was muffled by a filthy rag being stuffed into her mouth. "Gotcha, little mouse," a gravelly voice growled in her ear, the hot breath reeking of rot and decay. She thrashed and struggled, but it was no use. The raider's grip was like iron, his other hand now wrapped around her throat, cutting off her air supply. Dark spots danced before her eyes as she fought against the encroaching darkness. Just when she thought all hope was lost, a sudden, blinding flash of light illuminated the cramped space, followed by a sickening wet thud. The vice-like grip around her neck and ankle released, and she tumbled to the floor below, gasping for air. Coughing and wheezing, Kira looked up to see a hooded figure standing over her, a crackling energy weapon in hand. The stranger's face was obscured by a breather mask, but their eyes, a piercing shade of ice blue, seemed to bore into her very soul. "Come on," the figure growled, extending a gloved hand. "You're not safe here." Without hesitation, Kira took the offered hand, her heart pounding in her chest. She had no idea who this mysterious savior was, or what their motives might be, but one thing was certain: in the cold, unforgiving void of space, even the devil himself could be an angel. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then he and the others notice the mushroom clouds and they have to face the inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes to Oblivion The cold, unforgiving vacuum of space pressed against the reinforced glass of the observation deck like an unseen titan, its icy tendrils reaching out to claim the fragile bubble of life that was the Kepler-92b Orbital Station. Inside, however, the air was a cocoon of warmth and artificially generated oxygen, a testament to humanity's insatiable desire to conquer the stars. Commander Jameson "Jim" Harrow floated lazily in front of the window, his coffee cup held securely in the magnetic grip of the console beside him. The Earth, a blue and green marble, hung suspended in the inky void like a cosmic jewel, its beauty marred only by the thin, delicate veil of atmosphere that separated life from the cold emptiness of the void. "Another day in paradise," he muttered to himself, the words barely audible over the hum of the station's life support systems. Jim had been stationed on the Kepler-92b for the better part of two years, and the view still hadn't lost its ability to take his breath away. He'd seen the birth of countless stars, the death of ancient galaxies, and the dance of cosmic dust that made up the very fabric of the universe. But there was something about the sight of his home planet, so small and insignificant in the grand scheme of things, that never failed to humble him. His reverie was interrupted by the sudden jolt of the station's alarms, a shrill, piercing wail that cut through the silence like a knife. "What the hell?" he cursed, his heart rate spiking as he quickly unclipped himself from his seat and propelled himself towards the control console. The display in front of him was a chaotic mess of flashing red lights and scrolling text, warning of multiple system failures and hull breaches. 
"This can't be happening," he thought, his mind racing as he tried to make sense of the data in front of him. "Houston, this is Kepler-92b, do you copy? We've got multiple system failures and hull breaches on all decks. Request immediate assistance." Static was the only response. Jim's stomach dropped as the horrifying realization dawned on him. They were alone, cut off from Earth and any hope of rescue, millions of miles from the nearest outpost. "All hands, this is Commander Harrow. We have a situation. I repeat, we have a situation. All non-essential personnel, please report to the emergency shuttles immediately. We are initiating evacuation protocol Alpha-Delta-Bravo." The intercom crackled to life, but the only response was the deafening silence of the void. "Dammit," he cursed, his fingers flying over the console as he tried to manually override the system. In the mess hall, Dr. Sarah "Sally" Chen was enjoying her breakfast of reconstituted eggs and synthetic bacon when the alarms sounded. She looked up, her dark eyes wide with fear, as the other crew members around her scrambled to their feet, their trays of food forgotten. "What's happening?" one of the engineers, a young man named Thompson, asked, his voice trembling. "I don't know," Sally replied, her own voice betraying the same fear that gripped her heart. "But we need to get to the shuttles, now." She pushed herself off the table, her magnetic boots clinging to the floor as she made her way towards the emergency hatch. The others followed suit, their movements frantic and disorganized. As they made their way through the narrow corridors of the station, the gravity generators began to fail, sending them careening into walls and each other. Sally felt her stomach lurch as she was thrown against the bulkhead, her helmet slamming into the metal with a sickening crunch. "Sally!" Thompson's voice cut through the chaos, his face a mask of concern as he pulled her to her feet. "I'm fine," she lied, wincing as she rubbed her temple. "Let's go." They continued on, their progress slowed by the weightlessness and the debris that now floated freely through the corridors. Meanwhile, in the station's control room, Jim was fighting a losing battle. The damage was too extensive, and the station's systems were failing faster than he could repair them. "Houston, do you copy? This is Kepler-92b, we are losing control. Repeat, we are losing control." Still, there was only silence. He closed his eyes, his heart heavy with the weight of the decision he knew he had to make. "Sally, if you can hear me, I'm sorry. I'm so sorry." With a heavy heart, he initiated the station's self-destruct sequence, setting a course that would hopefully send the crippled vessel into the sun's unforgiving embrace. "May God have mercy on our souls," he whispered, as the countdown began. Back in the mess hall, Sally and the others had managed to make it to the emergency airlock. She could see the shuttles, their engines glowing a comforting red, just on the other side of the rapidly depressurizing chamber. "Go!" she shouted, pushing Thompson and the others ahead of her. "I'll hold them off!" "No, Sally, you can't!" he protested, but she shook her head, her resolve firm. "I'm not leaving without you," she said, her voice steady despite the fear that threatened to consume her. "I'll buy you some time," she promised, drawing her sidearm from its holster. "Just go." Thompson hesitated for a moment, his eyes filled with tears, before nodding and disappearing through the hatch. 
Sally took a deep breath, her heart pounding in her chest, as she turned to face the oncoming wave of debris. "For Earth," she whispered, as the airlock slammed shut behind her. In the control room, Jim watched as the hull breaches spread like a cancer, consuming the station one section at a time. He could feel the temperature dropping, the air growing thinner, as the vacuum of space began to claim its prize. He closed his eyes, his thoughts drifting to his wife and daughter back on Earth, the faces of his crewmates flashing through his mind like a flickering slideshow. "I'm sorry," he whispered, as the cold embrace of oblivion finally claimed him. And then, there was nothing. As the Kepler-92b Orbital Station was consumed by the unforgiving void, the Earth below continued to spin, oblivious to the tragedy that had just unfolded above it. The mushroom clouds that had once signaled the end of an era now nothing more than a distant memory, lost to the annals of time and the cold, uncaring vastness of space. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT Chapter 1: The Crimson Dusk The air was thick with the scent of molten steel and burnt ozone as Kaelen Voss, a third-generation terraformer, clung precariously to the skeletal frame of the colossal biodome. His magnetic boots barely held their grip on the slick surface, the sweat-soaked suit clinging to his wiry frame like a second skin. The unforgiving gaze of the dying sun, a distant ember in the ashen sky, cast long, menacing shadows across the barren landscape below. This was the edge of humanity's latest folly, the frontier of their insatiable hunger for survival. Kaelen's thoughts drifted to the faded holo-image of his wife and daughter, tucked away in the worn-out pocket of his utility belt. Their smiles, once vibrant and full of life, now seemed like a cruel mockery of the desolate world he called home. He knew the risks when he signed up for this mission, but the promise of a better life, of green pastures and azure skies, had been too alluring to resist. "Voss, you copy?" crackled the voice of his foreman, Jax, through the comms. "We're running behind schedule. Get your ass in gear!" "Copy that, Jax," Kaelen grunted, forcing a semblance of confidence into his voice. "Just sealing the last joint. Be down in a jiff." He activated the plasma welder, the brilliant blue-white arc dancing between the metal plates like a captive serpent. Sparks showered around him like a deadly rain, stinging his exposed visor. The hiss of metal on metal filled his ears, drowning out the deafening silence of the alien world. As he finished the final weld, a sudden tremor shook the entire structure, sending a jolt of terror down his spine. The ground beneath him convulsed, as if the very planet itself had awakened from a millennial slumber, angry at the audacity of these insignificant intruders. "What the hell was that?" Kaelen yelled, his heart pounding like a war drum in his chest. Static filled the comms, followed by panicked screams and garbled warnings. 
The biodome, the fragile lifeline for the thousands of souls huddled below, groaned ominously, its once-sturdy frame twisting and buckling like a child's toy. "Evacuate! Evacuate!" Jax's voice cut through the chaos, barely audible over the cacophony of destruction. But it was too late. With a deafening roar, the biodome's support struts snapped like brittle twigs, sending Kaelen hurtling into the crimson-tinged void. He watched in slow-motion horror as the world he knew disintegrated around him, a kaleidoscope of fire and debris swallowing everything in its path. His mind raced, a torrent of memories flooding his consciousness. The first time he held his newborn daughter, her tiny fingers curled around his calloused thumb. The taste of real strawberries, a luxury reserved for the elite back on Earth. The sound of his wife's laughter, like the melody of a long-forgotten song. And then, as the cold embrace of oblivion threatened to claim him, a single, haunting thought echoed through the recesses of his shattered psyche: "I'm sorry." --- Hours later, or perhaps it was days, Kaelen awoke to the sound of his own ragged breathing. His body was a symphony of agony, every nerve ending screaming in protest. He groaned, trying to move, but found himself pinned beneath a mountain of twisted metal and rubble. The once-familiar landscape had been reduced to an unrecognizable hellscape, the once-promising colony now a graveyard of shattered dreams and broken promises. He fumbled for his wrist-mounted comm, his trembling fingers brushing against something warm and sticky. Blood. Lots of it. "Mayday, this is Voss. Anyone... anyone out there?" he croaked, his voice barely a whisper. Silence. The realization hit him like a sledgehammer to the gut. He was alone. Trapped. And with his oxygen supply dwindling, he knew his time was running out. As the darkness closed in, Kaelen Voss, once a proud terraformer, accepted his fate. This was not the hero's death he had envisioned, nor the legacy he wanted to leave behind. But in the cold, unforgiving void of space, the universe cared little for the dreams of insignificant creatures. And so, as the last vestiges of life faded from his eyes, he whispered a final prayer to the stars above, a desperate plea for redemption and a chance to make things right. But the cosmos, as always, remained indifferent. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1: Whispers from the Abyss The void of deep space was a cacophony of silence, punctuated only by the rhythmic hum of the ancient listening station. The man, known only as Kael, sat hunched over the console, his eyes glazed over from hours of staring into the abyssal blackness that swallowed the feeble light of distant stars. His calloused fingers danced across the worn-out keys, coaxing out the faintest of whispers from the cosmic static. Kael had been stationed here for what felt like an eternity, his only companions the cold metal walls and the ever-present hum of the station's life support systems. 
He often wondered if anyone even remembered his existence, if his tireless vigilance was still deemed necessary by the faceless bureaucrats who had banished him to this forsaken outpost. But orders were orders, and so he listened. Suddenly, a blip on the monitor snapped him out of his reverie. A shiver ran down his spine as he leaned in closer, his heart pounding in his chest. It couldn't be... could it? He had heard the stories, of course, the whispers that circulated among the long-serving crew members. Tales of strange signals, garbled and indecipherable, that seemed to emanate from the very edge of known space. Most dismissed them as mere figments of overactive imaginations, the product of too much time spent alone in the cold emptiness. But Kael knew better. He had heard them before. His trembling hand reached for the headset, the cold metal pressing against his sweat-soaked skin. He took a deep breath, steeling himself for what was to come. "This is Outpost Alpha-Niner-Delta," he croaked into the microphone, his voice crackling with static. "Do you copy? Please repeat your last transmission." Silence. He exhaled a sigh of relief, chiding himself for his paranoia. It was probably just another damned cosmic anomaly, playing tricks on his tired mind. He was about to log the incident and return to his monotonous routine when it happened. A voice, barely audible at first, pierced the silence like a knife through the fabric of reality itself. It was unlike anything he had ever heard before, a chilling symphony of alien tones and frequencies that seemed to resonate deep within his very soul. "...not...alone..." The words were distorted, as if they had traveled through countless light-years of empty space, twisted and warped by forces beyond human comprehension. But there was no mistaking their chilling intent. Kael's blood ran cold as he realized the horrifying truth. They had found them. His hands shook uncontrollably as he fumbled with the controls, desperately trying to send out a distress signal. But it was too late. The station's systems began to malfunction, flickering erratically before plunging into darkness. The only light now came from the dying embers of the emergency backup, casting eerie shadows across the cramped room. The voice returned, louder this time, filling every corner of his mind with its otherworldly presence. "...we...coming..." Panic gripped him like a vice, squeezing the air from his lungs. He knew there was no escape, no refuge from whatever nightmarish entity had finally breached the fragile barrier between their world and the infinite void. As the last vestiges of light faded away, Kael's final thought was a prayer, a desperate plea to whatever gods may still be listening. "Please... let it be quick." But the darkness only seemed to grow colder, and the whispers continued to echo through the endless expanse of space, a chilling reminder that they were never truly alone. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. 
It's clear now that either all 12 will die when the resources run out and there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm everything breaks loose today. ### ASSISTANT In the bowels of the earth, where the last vestiges of humanity clung to a semblance of life, the technician sat hunched over his console, the green glow casting eerie shadows upon his gaunt visage. The hum of machinery, once a comforting lullaby, now grated on his frayed nerves like the wailing of a thousand banshees. The air was stale, thick with the stench of sweat and despair, a constant reminder of their entombment. He had been tasked with the Sisyphean duty of monitoring the Geiger counter, a futile exercise in hope masquerading as purpose. The needle had remained stubbornly pinned in the red for as long as he could remember, a crimson sentinel of their impending doom. Yet, he persisted, driven by the faintest ember of optimism that one day, by some miracle, it would dip into the safe zone, and they could emerge from their subterranean prison. But today, that ember was snuffed out. The intercom crackled to life, the static a harbinger of the tidings to come. "Attention, all personnel. This is Commander Reynolds. Please report to the briefing room immediately." The technician's heart sank like a lead weight in his chest. He knew what this meant. They had all known it was a matter of when, not if. The resources were dwindling, the rations growing thinner by the day. But to hear it confirmed, to have the cold, hard truth spoken aloud, was a blow he was ill-prepared to weather. He rose from his chair, his joints protesting like rusty hinges, and shuffled down the dimly lit corridor, the echoes of his footsteps the only company in the otherwise tomb-like silence. The other inhabitants of the bunker, twelve souls in total, were already gathered, their faces etched with a mixture of fear and resignation. Commander Reynolds stood at the front, his once-proud bearing now stooped with the weight of the world upon his shoulders. He cleared his throat, his voice raspy from disuse. "I won't mince words. The latest readings... they're not good. We've been hit with salted cobalt bombs. The surface... it's uninhabitable for centuries, if not longer." A collective gasp reverberated through the room, followed by a cacophony of whispers and sobs. The technician remained stoic, his mind numb to the news. He had suspected as much, had felt it in his bones for weeks now. "We have enough supplies for six more years, at most. But that's if we ration strictly and... and..." The commander trailed off, unable to finish the sentence. The implication hung heavy in the air, suffocating in its implications. They were all aware of the math, the grim calculus of survival. Twelve mouths to feed, enough for six. The unspoken question loomed like a specter in the room: who would be the lucky ones to draw the short straw? As the others erupted into panicked whispers and heated arguments, the technician retreated into his own thoughts. He thought of his wife, his children, their faces now little more than faded memories. He wondered if they had been spared this slow, agonizing end, if they had been vaporized in the initial blast, their souls released to the ether rather than condemned to this subterranean purgatory. A hand on his shoulder jolted him from his reverie. He turned to find Dr. Thompson, the bunker's sole medical officer, her eyes brimming with unshed tears. 
"We need to talk," she mouthed, her voice barely audible over the din. He nodded, a sinking feeling settling in his gut. He knew what she was going to say, had suspected it since the moment the news broke. They had discussed it in hushed whispers late at night, when they thought the others couldn't hear. Mercy, they had called it, a euphemism for the unthinkable. As they slipped away from the chaos, the technician couldn't help but wonder if this was truly the end of the world, or merely the beginning of a new, far darker chapter. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1948 Gold Rush, before moving to London in the mid 1970s. There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and the air in California was thick with the feverish scent of gold. Men from all corners of the world had converged upon the once-tranquil landscape, their eyes glazed with the insatiable lust for fortune. Amongst these ravenous prospectors, there was one who sought not the glittering metal, but rather, a treasure far more elusive and enigmatic. His name was Alaric Blackwood, a man of refined taste and enigmatic origins. He had arrived in San Francisco aboard the clipper ship 'The Seraphim's Song', his pockets lined with letters of recommendation from the most esteemed scholars of Europe. Armed with an unyielding determination and a keen intellect, he ventured into the untamed wilderness, guided by whispers of an ancient artifact rumored to possess otherworldly powers. The artifact in question was known as 'The Oculus Aureus', or 'The Golden Eye', a relic said to have been forged by the hands of the Aztec god Quetzalcoatl himself. Legend spoke of its ability to peer through the veil of time and space, granting its possessor unimaginable knowledge and power. It was this very object that had drawn Alaric to the New World, leaving behind the hallowed halls of Oxford and the comforts of his aristocratic upbringing. As he traversed the treacherous terrain, Alaric encountered a motley crew of misfits and fortune-seekers, each with their own tales of woe and wonder. Among them was a grizzled prospector named Silas, who claimed to have stumbled upon a cryptic map hidden within the pages of an ancient Spanish manuscript. The map, he insisted, would lead them to the very heart of the Sierra Nevada, where the Oculus Aureus lay hidden, guarded by the vengeful spirits of those who had dared to covet it before. Intrigued by the old man's tale, Alaric agreed to join forces with Silas and his band of unlikely allies. Together, they embarked on a perilous journey fraught with danger and deceit, their every step dogged by a shadowy cabal known only as 'The Order of the Serpent's Coil'. This clandestine society, it seemed, would stop at nothing to claim the Oculus for their own nefarious ends. Fast forward to London, 1975. The cobblestone streets of Covent Garden were alive with the sights and sounds of a city steeped in history and mystery. Amidst the bustling throng of tourists and street performers, a figure in a worn tweed coat and fedora hat moved with a stealthy grace, his keen eyes scanning the rows of antiquities that lined the market stalls. 
This was Archibald 'Archie' Fletcher, a seasoned private investigator with a penchant for the arcane and the unexplained. A former MI6 operative turned reluctant treasure hunter, he had made a name for himself as the man to call when all other avenues had been exhausted. It was on this particular autumnal day that Archie's life would take an unexpected turn. A cryptic message, delivered by a trembling messenger boy, led him to a dimly-lit pawnshop tucked away in a narrow alleyway. Inside, he found himself face-to-face with an object that would forever change the course of his existence: a tarnished gold amulet, bearing the unmistakable emblem of a coiled serpent devouring its own tail. The shopkeeper, a wizened old man with a twinkle in his eye, recognized the symbol at once. "Ah, Mr. Fletcher," he cackled, "I've been expecting you. This here trinket, it's been waiting for someone like you for a very long time." With a trembling hand, Archie reached out to take the amulet, a shiver of recognition coursing through his veins. He knew, deep within his bones, that he had stumbled upon a piece of the very puzzle that had eluded his predecessors for centuries. And so began the race against time, as Archie delved headfirst into a labyrinthine web of secrets and lies, spanning continents and centuries. From the dusty archives of the British Museum to the catacombs beneath the Vatican, he would uncover the truth about the Oculus Aureus and the dark forces that sought to harness its power. But little did he know, the key to unlocking the artifact's true potential lay hidden in the journals of a long-dead adventurer, whose footsteps he was destined to retrace. And as the threads of past and present began to intertwine, Archie would find himself drawn inexorably towards a confrontation with an enemy far more terrifying than any he could have ever imagined. For the Order of the Serpent's Coil had not been vanquished, and they would stop at nothing to reclaim what they believed was rightfully theirs. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them. ### ASSISTANT Chapter 1: The Frozen Hell The biting cold of the Ardennes Forest gnawed at my bones, each step a Herculean effort through the knee-deep snow. My breath came in ragged gasps, fogging up my frostbitten visor as I stumbled forward, my arms trembling under the weight of my fallen commander. The once pristine white landscape was now a grotesque tapestry of crimson and charcoal, punctuated by the twisted, lifeless forms of comrades and foes alike. The acrid stench of gunpowder and decaying flesh hung heavy in the air, a constant reminder of the carnage that had befallen us. "Kommissar, wir müssen sie zurücklassen!" a voice croaked from behind me, desperation etched into its trembling timbre. "Nein!" I growled through clenched teeth, my jaw aching from the effort. "Wir schaffen das!" We can do this. It was a lie, one I told myself as much as I did to the terrified young soldier who had become my makeshift stretcher-bearer. Leaving Oberleutnant Schmidt behind was not an option. 
He was our last semblance of hope, our fragile lifeline to the Fatherland. A deafening explosion rocked the earth beneath us, sending us sprawling into the frigid embrace of the snow. I felt a searing pain in my left leg, followed by a warm, wet sensation trickling down my calf. Ignoring the agony, I clawed my way back to my feet, my numb fingers fumbling for my Kar98k. The world around me swirled, a disorienting dance of white and grey. "Sind Sie in Ordnung, Leutnant?" the voice asked, concern lacing its tremors. "Ja, ja," I lied, gritting my teeth as I hauled myself upright, my vision slowly clearing. "Lass uns gehen." Let's go. We resumed our arduous trek, the distant rumble of artillery and the shrill whistle of incoming shells our macabre symphony. The trees around us, once majestic sentinels, now stood as skeletal specters, their limbs severed and twisted by the relentless barrage. The once-dense foliage offered little cover, leaving us exposed to the merciless onslaught from above. As we staggered on, my mind drifted back to the moment it all went wrong. The bunker had been our fortress, a concrete bastion against the encroaching enemy. The air inside was thick with the stench of sweat, fear, and cordite, but it was a welcome reprieve from the frozen hell outside. The men huddled together, their eyes glassy with exhaustion but burning with determination. Oberleutnant Schmidt paced the cramped quarters, barking orders into the static-filled field radio, his voice a beacon of resolve in the chaos. Then, without warning, the ground shook violently, and the world above us came crashing down. The deafening roar of the explosion was followed by an oppressive silence, punctuated only by the cries of the wounded and the groans of the dying. I awoke to darkness, my ears ringing and my head throbbing. Disoriented, I fumbled for my flashlight, its feeble beam illuminating a scene straight from the depths of Hades. The once-impenetrable walls of our sanctuary lay in ruins, the twisted metal and concrete tombs of my comrades intermingled with the earth. "Schmidt!" I croaked, my voice hoarse from the choking dust. A weak groan answered my call, guiding me through the debris like a siren's song. I dug frantically, my hands raw and bleeding, until I found him, pinned beneath a fallen beam. His uniform was soaked in crimson, his breathing shallow and labored. "Leutnant... wir... müssen..." he gasped, his eyes glassing over. "Nein, Oberleutnant, wir schaffen das!" I shouted, adrenaline coursing through my veins. With a final, Herculean effort, I freed him from his earthen prison, hoisting him onto my shoulders. "Wir kommen zurück." We're going back. The present snapped back into focus as a searing pain ripped through my chest, followed by an icy numbness that spread like a malevolent tendril. I stumbled, my vision blurring as I looked down to find a gaping hole where my heart should have been. A crimson bloom blossomed on my grey greatcoat, staining the pristine snow beneath me. "Nein... nein, nicht jetzt..." I whispered, my legs giving out beneath me. As I collapsed, the world faded to black, the cacophony of war replaced by an eerie silence. In my final moments, I saw them. The ghostly silhouettes of my fallen comrades, their spectral forms beckoning me to join them in the eternal embrace of Valhalla. "Wir haben es versucht, mein Führer," I whispered, my voice barely a whisper. We tried. The last thing I felt was the cold, unforgiving embrace of the snow, as the darkness claimed me forever. 
</details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be very dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT Anton Chigurh sat motionless in the dimly lit corner of the dingy Mexican cantina, his eyes scanning the room with the cold, reptilian gaze of a predator. The air was thick with the mingling scents of stale cigarette smoke, cheap tequila, and sweat-soaked leather. He was a man out of place, his clean-shaven head and pressed black suit a stark contrast to the sea of unkempt beards and tattered denim that surrounded him. Yet, not a single patron dared to meet his gaze, as if they could sense the malevolence that radiated from his very being. He fingered the worn coin in his pocket, the smooth surface a comforting reminder of the order he sought to impose on the chaotic world around him. Chance, the great equalizer, the only force he truly believed in. It was the coin that would decide the fate of the man he was here to meet, just as it had decided the fates of so many others before him. As the minutes ticked by, the tension in the room grew palpable, like a coiled rattlesnake poised to strike. Anton welcomed it, feeding off the fear that hung heavy in the air like a suffocating blanket. It was in these moments of anticipation that he felt most alive, the thrill of the unknown coursing through his veins like a potent drug. Finally, the door creaked open, and a portly man with greasy hair and a gold tooth sauntered in, his eyes darting nervously from one face to the next. Anton's gaze locked onto him like a laser, and he knew without a doubt that this was his mark. The man made his way to the bar, his hands trembling as he ordered a shot of tequila. Anton remained still, his patience unwavering as a statue carved from ice. He would wait, he would watch, and when the time was right, he would strike. As the minutes dragged on, the man's nerves began to fray, his Adam's apple bobbing like a fish out of water with each gulp of the fiery liquid. Anton could almost taste the fear emanating from him, a bitter yet intoxicating aroma that only served to heighten his own anticipation. At last, the bartender slid a folded piece of paper across the sticky countertop, and the man snatched it up like a drowning man grasping for a lifeline. Anton's moment had arrived. He rose from his seat, his movements fluid and graceful, like a panther stalking its prey. The room seemed to slow to a crawl as he approached the bar, the clinking of glasses and mumbled conversations fading into a distant hum. The man's eyes widened in terror as he felt the cold steel of Anton's silenced pistol press against the base of his skull. "Toss me the keys," Anton growled in a voice that was little more than a whisper. The man complied without hesitation, his hands shaking so violently that the keys clattered to the floor. Anton stooped down to retrieve them, never once breaking eye contact. "Now, let's take a little walk," he said, his tone deceptively calm. The man stumbled forward, his legs threatening to buckle beneath him. 
Anton followed closely behind, the barrel of the gun never wavering from the base of his skull. They made their way through the crowded cantina, the patrons parting like the Red Sea before them, their eyes fixed on the floor in a desperate attempt to avoid drawing attention to themselves. Once they were outside, Anton shoved the man towards a battered pickup truck parked in the alleyway. He opened the passenger door and gestured for the man to get in, his movements precise and efficient. The man complied, his breath coming in ragged gasps. As they drove through the winding backstreets of the Mexican border town, Anton's thoughts turned to the task at hand. The job was simple enough: retrieve a shipment of heroin that had gone missing, and eliminate anyone who had dared to cross his employer's path. But it was the method of execution that intrigued him most. He reached into his coat pocket and retrieved a worn leather pouch, the weight of its contents a familiar comfort in his hand. Inside lay a shiny silver grenade, its pin still intact. It was a new addition to his arsenal, a gift from a former associate who had failed to appreciate the beauty of chance. Anton had yet to test it out, but he had a feeling that tonight would be the perfect opportunity. As they neared their destination, a decrepit warehouse on the outskirts of town, Anton's pulse quickened with anticipation. He could feel the adrenaline coursing through his veins, the thrill of the hunt reaching a fever pitch. He pulled the truck to a stop a few blocks away and motioned for the man to get out. They approached the warehouse on foot, the sound of their footsteps echoing off the empty buildings like gunshots in the still night air. As they neared the entrance, Anton could hear the faint sounds of laughter and music drifting from within. He smiled, his thin lips stretching into a chilling grin. It seemed his prey was enjoying their final moments, blissfully unaware of the fate that awaited them. He signaled for the man to open the door, and they slipped inside, the stench of rotting produce and stale cigarette smoke assaulting their senses. Anton's eyes adjusted to the dim lighting, and he took in the scene before him: a dozen or so men, heavily armed and surrounded by stacks of neatly packaged heroin. He reached into his pocket and flipped the coin, the sound of it clinking against the metal of his gun's chamber the only sound in the otherwise deafening silence. Heads snapped in their direction, but it was too late. In a blur of motion, Anton had the grenade in his hand, the pin pulled and the lever depressed. He tossed it into the center of the room, the men's faces contorting in terror as they realized their fate. The explosion was deafening, the force of it knocking him back against the wall. When the smoke cleared, the warehouse was reduced to a smoldering ruin, the screams of the dying men a symphony of agony that filled his ears. He walked through the carnage, his boots crunching on shattered glass and charred debris. The man he had brought with him lay in a pool of his own blood, a look of pure terror etched onto his face. Anton knelt down beside him, his expression unreadable. "Tell me," he said, his voice barely audible over the ringing in his ears, "do you still believe in chance?" The man's eyes flickered, and a bubbling sound escaped his lips, the last vestiges of life slipping away. Anton chuckled to himself, the sound devoid of any trace of humor. 
"I didn't think so," he muttered, before standing up and disappearing into the night, the taste of chaos and destruction still lingering on his lips. </details> Big thanks to @chargoddard for creating [Mergekit](https://github.com/arcee-ai/mergekit)!
iloncka/data_simple_orig_obj-clust_spl-subs_1_v_4_xresnet50_ep_20
iloncka
2024-05-15T22:04:14Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-05-15T21:48:42Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
xpslh/test
xpslh
2024-05-15T22:03:33Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-15T22:03:08Z
--- license: apache-2.0 ---
emilykang/Phi_medner-neurology
emilykang
2024-05-15T22:00:25Z
148
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T21:46:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/jina-embeddings-v2-base-en-5162024-o9um-webapp
fine-tuned
2024-05-15T21:55:22Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Note-taking", "Organization", "Memory", "AI", "Virtual assistant", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-5162024-o9um-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-15T21:55:07Z
--- license: apache-2.0 datasets: - fine-tuned/jina-embeddings-v2-base-en-5162024-o9um-webapp - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Note-taking - Organization - Memory - AI - Virtual assistant --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: Note-taking app features search ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-5162024-o9um-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
BANA577/Llama3-Michael-5
BANA577
2024-05-15T21:54:15Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T21:48:18Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
BANA577/Mistral-Koichi1
BANA577
2024-05-15T21:53:36Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T21:45:57Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
chinh102/chinh1002
chinh102
2024-05-15T21:52:50Z
135
0
transformers
[ "transformers", "safetensors", "deta", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-05-15T21:52:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ClaudioItaly/mergekit-slerp-ebgdloh-Q4_K_M-GGUF
ClaudioItaly
2024-05-15T21:50:19Z
1
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:fblgit/UNA-TheBeagle-7b-v1", "base_model:merge:fblgit/UNA-TheBeagle-7b-v1", "endpoints_compatible", "region:us" ]
null
2024-05-15T21:50:08Z
--- library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: - fblgit/UNA-TheBeagle-7b-v1 - SanjiWatsuki/Kunoichi-DPO-v2-7B --- # ClaudioItaly/mergekit-slerp-ebgdloh-Q4_K_M-GGUF This model was converted to GGUF format from [`mergekit-community/mergekit-slerp-ebgdloh`](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo ClaudioItaly/mergekit-slerp-ebgdloh-Q4_K_M-GGUF --model mergekit-slerp-ebgdloh.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo ClaudioItaly/mergekit-slerp-ebgdloh-Q4_K_M-GGUF --model mergekit-slerp-ebgdloh.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mergekit-slerp-ebgdloh.Q4_K_M.gguf -n 128 ```
RichardErkhov/NeuralNovel_-_Panda-7B-v0.1-8bits
RichardErkhov
2024-05-15T21:49:43Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-15T21:43:51Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Panda-7B-v0.1 - bnb 8bits - Model creator: https://huggingface.co/NeuralNovel/ - Original model: https://huggingface.co/NeuralNovel/Panda-7B-v0.1/ Original model description: --- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.2 datasets: - NeuralNovel/Creative-Logic-v1 - NeuralNovel/Neural-Story-v1 library_name: transformers inference: false --- ![Neural-Story](https://i.ibb.co/TYvZhws/Panda7b.png) # NeuralNovel/Panda-7B-v0.1 The **Panda-7B-v0.1** model by NeuralNovel. This fine-tune has been designed to provide detailed, creative and logical responses in the context of diverse narratives. Optimised for creative writing, roleplay and logical problem solving. Full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2. Apache-2.0 license, suitable for commercial or non-commercial use. <a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> <a href='https://discord.gg/KFS229xD' target='_blank'><img width='140' height='500' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a> ### Data-set The model was fine-tuned using the Panda-v1 dataset. ### Summary Fine-tuned with the intention to generate instructive and narrative text, with a specific focus on combining the elements of versatility, character engagement and nuanced writing capability. #### Out-of-Scope Use The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes. ### Bias, Risks, and Limitations The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences. Users are advised to exercise caution, as there might be some inherent genre or writing bias. ### Hardware and Training ``` n_epochs = 3, n_checkpoints = 3, batch_size = 12, learning_rate = 1e-5, ``` *Sincere appreciation to Techmind for their generous sponsorship.*
Litzy619/G0515HMA21H
Litzy619
2024-05-15T21:48:53Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-15T20:59:44Z
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0515HMA21H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0515HMA21H This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1841 | 0.09 | 10 | 2.8751 | | 2.5107 | 0.18 | 20 | 1.9774 | | 1.4768 | 0.27 | 30 | 0.8765 | | 0.4892 | 0.36 | 40 | 0.2067 | | 0.1718 | 0.45 | 50 | 0.1561 | | 0.1522 | 0.54 | 60 | 0.1523 | | 0.1497 | 0.63 | 70 | 0.1483 | | 0.15 | 0.73 | 80 | 0.1496 | | 0.1434 | 0.82 | 90 | 0.1502 | | 0.1464 | 0.91 | 100 | 0.1485 | | 0.1496 | 1.0 | 110 | 0.1545 | | 0.1448 | 1.09 | 120 | 0.1500 | | 0.1452 | 1.18 | 130 | 0.1482 | | 0.1457 | 1.27 | 140 | 0.1470 | | 0.1489 | 1.36 | 150 | 0.1466 | | 0.1425 | 1.45 | 160 | 0.1505 | | 0.1444 | 1.54 | 170 | 0.1477 | | 0.1463 | 1.63 | 180 | 0.1462 | | 0.1459 | 1.72 | 190 | 0.1485 | | 0.1459 | 1.81 | 200 | 0.1476 | | 0.1475 | 1.9 | 210 | 0.1470 | | 0.1469 | 1.99 | 220 | 0.1461 | | 0.1429 | 2.08 | 230 | 0.1453 | | 0.1386 | 2.18 | 240 | 0.1431 | | 0.14 | 2.27 | 250 | 0.1416 | | 0.1406 | 2.36 | 260 | 0.1402 | | 0.1379 | 2.45 | 270 | 0.1397 | | 0.1349 | 2.54 | 280 | 0.1366 | | 0.1358 | 2.63 | 290 | 0.1348 | | 0.1319 | 2.72 | 300 | 0.1333 | | 0.1337 | 2.81 | 310 | 0.1322 | | 0.1322 | 2.9 | 320 | 0.1322 | | 0.1335 | 2.99 | 330 | 0.1322 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
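The hyperparameter list in this card maps fairly directly onto a 🤗 Transformers `TrainingArguments` configuration. The sketch below is a hypothetical reconstruction for illustration only — the actual training script is not part of the repository, and the `output_dir` name is a placeholder.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above (not the original script).
training_args = TrainingArguments(
    output_dir="G0515HMA21H",            # placeholder output directory
    learning_rate=3e-4,                  # 0.0003
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,      # 8 * 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
)
```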
koq1231/bert-finetuned-2
koq1231
2024-05-15T21:48:23Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-15T21:47:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mergekit-community/mergekit-slerp-ebgdloh
mergekit-community
2024-05-15T21:48:12Z
8
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:fblgit/UNA-TheBeagle-7b-v1", "base_model:merge:fblgit/UNA-TheBeagle-7b-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T21:40:36Z
--- base_model: - fblgit/UNA-TheBeagle-7b-v1 - SanjiWatsuki/Kunoichi-DPO-v2-7B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1) * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: fblgit/UNA-TheBeagle-7b-v1 - model: SanjiWatsuki/Kunoichi-DPO-v2-7B merge_method: slerp base_model: fblgit/UNA-TheBeagle-7b-v1 dtype: bfloat16 parameters: t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers ```
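For readers unfamiliar with the merge method: SLERP interpolates between two checkpoints along the arc between their weight vectors rather than along a straight line, and the `t` schedule above keeps the base model (UNA-TheBeagle) at the input and output layers while blending toward Kunoichi-DPO-v2 in the middle layers (the inline comment in the YAML still names the models from mergekit's documentation example). The following is a generic, minimal sketch of spherical linear interpolation on two tensors, not mergekit's actual implementation.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    a_unit = a.flatten() / (a.norm() + eps)
    b_unit = b.flatten() / (b.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b
```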
emilykang/Phi_medner-neurology_lora
emilykang
2024-05-15T21:46:42Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-05-15T21:30:11Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 datasets: - generator model-index: - name: Phi_medner-neurology_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi_medner-neurology_lora This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
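Since this repository contains only LoRA adapter weights, inference requires attaching them to the `microsoft/phi-2` base model named in the card. A minimal sketch with 🤗 PEFT, assuming the adapter loads cleanly onto that base checkpoint (the prompt is an arbitrary placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/phi-2"
adapter_id = "emilykang/Phi_medner-neurology_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("List common symptoms of migraine:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```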
stevhliu/lightning
stevhliu
2024-05-15T21:45:29Z
7
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-15T21:36:57Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
franciscobdl/EstigiaxTinyLlama
franciscobdl
2024-05-15T21:45:06Z
148
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-29T23:14:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details Uses the TinyLlama chat 1.1 model. It does not have a system field in the template. ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bghira/terminus-xl-gamma-v2
bghira
2024-05-15T21:37:09Z
5
6
diffusers
[ "diffusers", "safetensors", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-12-10T12:15:06Z
--- license: openrail++ --- # Terminus XL Gamma (v2 preview) ## Model Details ### Model Description Terminus XL Gamma is a new state-of-the-art latent diffusion model that uses zero-terminal SNR noise schedule and velocity prediction objective at training and inference time. Terminus is based on a similar architecture to SDXL, and has the same layout. It has been trained on fewer steps with very high quality data captions via COCO and Midjourney. This model will not be capable of as many concepts as SDXL, and some subjects will simply look very bad. The objective of this model was to use v-prediction and min-SNR gamma loss to efficiently train a full zero-terminal SNR model on a single A100-80G. - **Fine-tuned from:** ptx0/terminus-xl-gamma-v1 - **Developed by:** pseudoterminal X (@bghira) - **Funded by:** pseudoterminal X (@bghira) - **Model type:** Latent Diffusion - **License:** openrail++ - **Architecture:** SDXL ### Model Sources - **Repository:** https://github.com/bghira/SimpleTuner ## Uses ### Direct Use Terminus XL Gamma can be used for generating high-quality images given text prompts. It should particularly excel at inpainting tasks, where a zero-terminal SNR noise schedule allows it to more effectively retain contrast. The model can be utilized in creative industries such as art, advertising, and entertainment to create visually appealing content. ### Downstream Use Terminus XL Gamma can be fine-tuned for specific tasks such as image super-resolution, style transfer, and more. ### Out-of-Scope Use The model is not designed for tasks outside of image generation. It should not be used to produce harmful content, or deceive others. Please use common sense. ## Bias, Risks, and Limitations The model might exhibit biases present in the training data. The generated images should be carefully reviewed to ensure they meet ethical and societal standards. ### Recommendations Users should be cautious of potential biases in the generated images and thoroughly review them before use. ## Training Details ### Training Data This model's success largely depended on a somewhat small collection of very high quality data samples. * LAION-HD, filtered down to EXIF samples without watermarks. Luminance value of samples capped to 100 (.5). * Midjourney 5.2 dataset `ptx0/mj-general` with zero filtration. ### Training Procedure #### Preprocessing Most of the existing process for terminus-xl-gamma-v1 was followed, with the exception of training extensively on cropped images using SDXL's crop coordinates to improve fine details. No images were upsampled during this training session. Images were downsampled using LANCZOS instead of BICUBIC filters to attain higher image fidelity and maintain more image context for the model to learn from. Only high-quality photos were used in this training session, greatly improving the realism qualities. ~770,000 images were used for this training run. 
#### Training Hyperparameters - **Training regime:** bf16 mixed precision - **Learning rate:** \(4 \times 10^{-7}\) to \(8 \times 10^{-7}\), cosine schedule - **Epochs:** 60 - **Batch size:** 24 * 15 = 360 #### Speeds, Sizes, Times [More Information Needed] ## Evaluation ### Testing Data, Factors & Metrics [More Information Needed] ### Results [More Information Needed] #### Summary [More Information Needed] ## Environmental Impact - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications ### Model Architecture and Objective The model uses an SDXL-compatible latent diffusion architecture with a unique min-SNR augmented velocity objective. ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [More Information Needed] ## More Information [More Information Needed] ## Model Card Authors [More Information Needed] ## Model Card Contact [More Information Needed]
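Because this repository is tagged with `StableDiffusionXLPipeline`, inference should work through the standard SDXL pipeline in diffusers; the main caveat is that a v-prediction, zero-terminal-SNR model needs a matching scheduler configuration. The sketch below is a hedged example, not an officially documented recipe for this checkpoint — the scheduler choice and the `guidance_rescale` value are assumptions based on generic zero-terminal-SNR guidance.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "bghira/terminus-xl-gamma-v2", torch_dtype=torch.float16
).to("cuda")

# Match the training objective: velocity prediction with a zero-terminal-SNR schedule.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)

image = pipe(
    "a photograph of a lighthouse at dusk, high contrast",
    guidance_scale=7.0,
    guidance_rescale=0.7,  # commonly suggested for zero-terminal-SNR models
).images[0]
image.save("terminus_sample.png")
```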
iloncka/data_simple_orig_obj-clust_spl-subs_1_v_3_xresnet50_ep_20
iloncka
2024-05-15T21:35:58Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-05-15T18:45:31Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
bghira/terminus-xl-otaku-v1
bghira
2024-05-15T21:35:46Z
345
6
diffusers
[ "diffusers", "safetensors", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-12-24T15:55:58Z
--- license: openrail++ --- # Terminus XL Otaku (v1 preview) ## Model Details ### Model Description Terminus XL Otaku is a latent diffusion model that uses zero-terminal SNR noise schedule and velocity prediction objective at training and inference time. Terminus is a new state-of-the-art model family based on SDXL's architecture, and is compatible with (most) SDXL pipelines. For Terminus Otaku (this model), the training data is exclusively anime/celshading/3D renders and other hand-drawn or synthetic art styles. The objective of this model was to continue the use of v-prediction objective and min-SNR gamma loss to adapt Terminus Gamma v2's outputs to a more artistic style. - **Fine-tuned from:** ptx0/terminus-xl-gamma-v2 - **Developed by:** pseudoterminal X (@bghira) - **Funded by:** pseudoterminal X (@bghira) - **Model type:** Latent Diffusion - **License:** openrail++ - **Architecture:** SDXL ### Model Sources - **Repository:** https://github.com/bghira/SimpleTuner ## Uses ### Direct Use Terminus XL Otaku can be used for generating high-quality images given text prompts. It should particularly excel at inpainting tasks for animated subject matter, where a zero-terminal SNR noise schedule allows it to more effectively retain contrast. The model can be utilized in creative industries such as art, advertising, and entertainment to create visually appealing content. ### Downstream Use Terminus XL Otaku can be fine-tuned for specific tasks such as image super-resolution, style transfer, and more. However, it's recommended that the v1 preview not be used for fine-tuning until it is fully released, as any structural issues will hopefully be resolved by then. ### Out-of-Scope Use The model is not designed for tasks outside of image generation. It should not be used to produce harmful content, or deceive others. Please use common sense. ## Bias, Risks, and Limitations The model might exhibit biases present in the training data. The generated images should be carefully reviewed to ensure they meet ethical and societal standards. ### Recommendations Users should be cautious of potential biases in the generated images and thoroughly review them before use. ## Training Details ### Training Data This model's success largely depended on a somewhat small collection of very high quality data samples. * Indiscriminate use of NijiJourney outputs. * Midjourney 5.2 outputs that mention anime styles in their tags. * Niji and MJ Showcase images that were re-captioned using CogVLM. * Anchor data of real human subjects in a small (10%) ratio to the animated material, to retain coherence. ### Training Procedure #### Preprocessing This model is (so far) trained exclusively on cropped images using SDXL's crop coordinates to improve fine details. No images were upsampled or downsampled during this training session. Instead, random crops (or unaltered 1024px square images) were used in lieu. ~50,000 images were used for this training run with continuous collection throughout the process, making it difficult to ascertain how many exact images were used. 
#### Training Hyperparameters - **Training regime:** bf16 mixed precision - **Learning rate:** \(1 \times 10^{-7}\) to \(1 \times 10^{-6}\), cosine schedule - **Epochs:** 11 - **Batch size:** 12 * 8 = 96 #### Speeds, Sizes, Times [More Information Needed] ## Evaluation ### Testing Data, Factors & Metrics [More Information Needed] ### Results [More Information Needed] #### Summary [More Information Needed] ## Environmental Impact - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications ### Model Architecture and Objective The model uses an SDXL-compatible latent diffusion architecture with a unique min-SNR augmented velocity objective. ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [More Information Needed] ## More Information [More Information Needed] ## Model Card Authors [More Information Needed] ## Model Card Contact [More Information Needed]
bghira/terminus-xl-gamma-v1
bghira
2024-05-15T21:34:03Z
566
19
diffusers
[ "diffusers", "safetensors", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-03T01:52:16Z
--- license: openrail++ --- # Terminus XL Gamma ## Model Details ### Model Description Terminus XL Gamma is a new state-of-the-art latent diffusion model that uses zero-terminal SNR noise schedule and velocity prediction objective at training and inference time. Terminus is based on the same architecture as SDXL, and has the same layout. It has been trained on fewer steps with very high quality data captions via COCO and Midjourney. This model will not be capable of as many concepts as SDXL, and some subjects will simply look very bad. The objective of this model was to use min-SNR gamma loss to efficiently train a full model on a single A100-80G. - **Developed by:** pseudoterminal X (@bghira) - **Funded by:** pseudoterminal X (@bghira) - **Model type:** Latent Diffusion - **License:** openrail++ - **Architecture:** SDXL ### Model Sources - **Repository:** https://github.com/bghira/SimpleTuner ## Uses ### Direct Use Terminus XL Gamma can be used for generating high-quality images given text prompts. It should particularly excel at inpainting tasks, where a zero-terminal SNR noise schedule allows it to more effectively retain contrast. The model can be utilized in creative industries such as art, advertising, and entertainment to create visually appealing content. ### Downstream Use Terminus XL Gamma can be fine-tuned for specific tasks such as image super-resolution, style transfer, and more. ### Out-of-Scope Use The model is not designed for tasks outside of image generation. It should not be used to produce harmful content, or deceive others. Please use common sense. ## Bias, Risks, and Limitations The model might exhibit biases present in the training data. The generated images should be carefully reviewed to ensure they meet ethical and societal standards. ### Recommendations Users should be cautious of potential biases in the generated images and thoroughly review them before use. ## Training Details ### Training Data This model's success largely depended on a somewhat small collection of very high quality data samples. * LAION-HD, filtered down to EXIF samples without watermarks. Luminance value of samples capped to 100 (.5). * Midjourney 5.2 dataset `ptx0/mj-general` with zero filtration. ### Training Procedure #### Preprocessing Followed SDXL's pretraining procedure using crop conditional inputs and centre-cropped images with their full size as the input. Trained on 512x512, followed by 768x768, and finally, ~1 megapixel multi-aspect training for the rest of the training time. Images were downsampled while maintaining aspect ratio and cropped on 64 pixel increments. Many aspect ratios were trained, but only a few are likely to work fully. #### Training Hyperparameters - **Training regime:** bf16 mixed precision - **Learning rate:** \(4 \times 10^{-7}\) to \(8 \times 10^{-7}\), cosine schedule - **Epochs:** 60 - **Batch size:** 24 * 15 = 360 #### Speeds, Sizes, Times [More Information Needed] ## Evaluation ### Testing Data, Factors & Metrics [More Information Needed] ### Results [More Information Needed] #### Summary [More Information Needed] ## Environmental Impact - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications ### Model Architecture and Objective The model uses an SDXL-compatible latent diffusion architecture with a unique min-SNR augmented velocity objective. 
### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [More Information Needed] ## More Information [More Information Needed] ## Model Card Authors [More Information Needed] ## Model Card Contact [More Information Needed]
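The card describes a "min-SNR augmented velocity objective". As a rough illustration of what that loss weighting looks like — following the Min-SNR-gamma formulation used in common diffusers training scripts, not this model's actual training code — a per-timestep weight can be computed from the noise schedule's SNR; the gamma value of 5 and the linearly spaced `alphas_cumprod` below are assumptions for the example.

```python
import torch

def min_snr_v_weight(snr: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    """Per-timestep loss weight for v-prediction with min-SNR-gamma clamping."""
    return torch.minimum(snr, torch.full_like(snr, gamma)) / (snr + 1.0)

# Example: SNR per timestep derived from a (made-up) alphas_cumprod schedule.
alphas_cumprod = torch.linspace(0.9999, 1e-4, 1000)
snr = alphas_cumprod / (1.0 - alphas_cumprod)
weights = min_snr_v_weight(snr, gamma=5.0)
# During training: weighted_loss = (weights[timesteps] * per_sample_mse).mean()
```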
RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf
RichardErkhov
2024-05-15T21:30:10Z
23
1
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-15T19:45:12Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Meta-Llama-3-8B-hf - GGUF - Model creator: https://huggingface.co/Undi95/ - Original model: https://huggingface.co/Undi95/Meta-Llama-3-8B-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Meta-Llama-3-8B-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q2_K.gguf) | Q2_K | 2.96GB | | [Meta-Llama-3-8B-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Meta-Llama-3-8B-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Meta-Llama-3-8B-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Meta-Llama-3-8B-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Meta-Llama-3-8B-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q3_K.gguf) | Q3_K | 3.74GB | | [Meta-Llama-3-8B-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Meta-Llama-3-8B-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Meta-Llama-3-8B-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Meta-Llama-3-8B-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q4_0.gguf) | Q4_0 | 4.34GB | | [Meta-Llama-3-8B-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Meta-Llama-3-8B-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Meta-Llama-3-8B-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q4_K.gguf) | Q4_K | 4.58GB | | [Meta-Llama-3-8B-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Meta-Llama-3-8B-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q4_1.gguf) | Q4_1 | 4.78GB | | [Meta-Llama-3-8B-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q5_0.gguf) | Q5_0 | 5.21GB | | [Meta-Llama-3-8B-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Meta-Llama-3-8B-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q5_K.gguf) | Q5_K | 5.34GB | | [Meta-Llama-3-8B-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | 
[Meta-Llama-3-8B-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q5_1.gguf) | Q5_1 | 5.65GB | | [Meta-Llama-3-8B-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q6_K.gguf) | Q6_K | 6.14GB | | [Meta-Llama-3-8B-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Meta-Llama-3-8B-hf-gguf/blob/main/Meta-Llama-3-8B-hf.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. 
Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). 
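Expanding on the transformers snippet above, here is a hedged sketch of the same pipeline call with explicit generation parameters. The sampling values (`max_new_tokens`, `temperature`, `top_p`) are illustrative assumptions, not settings recommended by this card.

```python
# Minimal sketch extending the transformers pipeline example above.
# The generation arguments below are illustrative defaults, not card-recommended values.
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Keyword arguments on the call are forwarded to model.generate().
outputs = pipeline(
    "Hey how are you doing today?",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"])
```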
To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
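Since this repository's purpose is the set of GGUF quantizations listed in the table at the top of this card, a short local-inference sketch may be useful. It is a hedged example, not part of the original card: it assumes the `llama-cpp-python` bindings are installed and that one of the quant files (here Q4_K_M) has already been downloaded from this repo; the 8192-token context follows the 8k figure in the model details above.

```python
# Hedged sketch: load one of the GGUF files from the quantization table with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded quant file.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-hf.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=8192,       # Llama 3 context length per the model details above
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("The capital of France is", max_tokens=32)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K through Q4_K) trade output quality for memory; the Q8_0 file stays closest to the original fp16 weights.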
emilykang/Phi_medner-generalmedicine
emilykang
2024-05-15T21:29:57Z
146
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T21:13:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
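The "How to Get Started with the Model" section above is still a placeholder. Based only on the repository tags (`phi`, `text-generation`, `transformers`), a hedged loading sketch might look like the following; the model id is simply the repository name, and nothing here is confirmed by the card itself.

```python
# Hedged sketch inferred from the repo tags only; not an official usage example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "emilykang/Phi_medner-generalmedicine"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Patient presents with", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```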
hmandsager/detr-resnet-50_finetuned_cppe5
hmandsager
2024-05-15T21:27:47Z
217
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-05-13T16:53:01Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet-50_finetuned_cppe5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an otherwise unspecified dataset (the repository name suggests CPPE-5). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.2 - Datasets 2.19.1 - Tokenizers 0.19.1
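The card lists training hyperparameters but no usage snippet. A hedged inference sketch with the transformers object-detection pipeline is shown below; the image path is a placeholder, and the label set depends on the (unnamed) fine-tuning dataset.

```python
# Hedged sketch: run the fine-tuned detector on a local image.
from transformers import pipeline

detector = pipeline("object-detection", model="hmandsager/detr-resnet-50_finetuned_cppe5")

# Each prediction is a dict with "label", "score" and a bounding "box".
for pred in detector("example.jpg"):  # placeholder path to any test image
    print(pred["label"], round(pred["score"], 3), pred["box"])
```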
mergekit-community/Llama3-12B-wwe
mergekit-community
2024-05-15T21:24:05Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:merge:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:hfl/llama-3-chinese-8b-instruct-v2", "base_model:merge:hfl/llama-3-chinese-8b-instruct-v2", "base_model:shenzhi-wang/Llama3-8B-Chinese-Chat", "base_model:merge:shenzhi-wang/Llama3-8B-Chinese-Chat", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T21:18:34Z
--- base_model: - hfl/llama-3-chinese-8b-instruct-v2 - NousResearch/Hermes-2-Pro-Llama-3-8B - shenzhi-wang/Llama3-8B-Chinese-Chat library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [hfl/llama-3-chinese-8b-instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2) * [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) * [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - layer_range: [0, 16] model: shenzhi-wang/Llama3-8B-Chinese-Chat - sources: - layer_range: [6, 24] model: hfl/llama-3-chinese-8b-instruct-v2 - sources: - layer_range: [8, 32] model: NousResearch/Hermes-2-Pro-Llama-3-8B merge_method: passthrough dtype: float16 ```
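The merged checkpoint loads like any other transformers causal LM. The sketch below is hedged: the dtype mirrors the `dtype: float16` line in the config above, the prompt is arbitrary, and a roughly 12B passthrough merge needs correspondingly more memory than its 8B sources. Re-running the merge itself is typically done with mergekit's CLI (for example `mergekit-yaml <config.yaml> <output-dir>`), though the exact invocation depends on the installed mergekit version.

```python
# Hedged sketch: load the merged model produced by the passthrough config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/Llama3-12B-wwe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches `dtype: float16` in the merge config
    device_map="auto",
)

inputs = tokenizer("Write one sentence introducing yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```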
GrigoriiA/parler-tts-mini-v0.1-su-finetune2
GrigoriiA
2024-05-15T21:21:46Z
61
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-15T21:16:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
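The card above is still the empty template. Assuming this repository follows the upstream Parler-TTS API (the name suggests a fine-tune of `parler-tts/parler-tts-mini-v0.1`), a hedged synthesis sketch could look like this; the class name, the `prompt_input_ids` argument and the `soundfile` dependency come from the upstream Parler-TTS examples, not from this card, and the example text is arbitrary.

```python
# Hedged sketch, assuming the upstream Parler-TTS API applies to this fine-tune.
import torch
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

model_id = "GrigoriiA/parler-tts-mini-v0.1-su-finetune2"
device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained(model_id).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

description = "A calm speaker with a clear voice and very little background noise."
prompt = "Hello, this is a short test sentence."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```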
Shaleen123/llama3-science-8bit
Shaleen123
2024-05-15T21:19:33Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-15T21:16:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
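Again the card is an unmodified template. Based only on the repository tags (`llama`, `text-generation`, `conversational`, `8-bit`, `bitsandbytes`), a hedged loading sketch follows; because the tags indicate the weights are stored with 8-bit bitsandbytes quantization, a CUDA GPU and the `bitsandbytes` package are assumed.

```python
# Hedged sketch inferred from the repo tags only; bitsandbytes + CUDA assumed for the 8-bit weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shaleen123/llama3-science-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("In one sentence, what is entropy?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```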
mmontecino/albert-base-v2-finetuned.h5
mmontecino
2024-05-15T21:15:30Z
162
0
transformers
[ "transformers", "safetensors", "albert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-15T21:15:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
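This card, too, is an unfilled template. Going only by the repository tags (`albert`, `fill-mask`), a hedged masked-language-modelling sketch is shown below; the example sentence is arbitrary and the quality of the predictions depends on whatever fine-tuning the checkpoint received.

```python
# Hedged sketch inferred from the repo tags only; not an official usage example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mmontecino/albert-base-v2-finetuned.h5")

# ALBERT uses the [MASK] token; each prediction carries a score and the filled-in sequence.
for pred in fill_mask("The capital of France is [MASK]."):
    print(round(pred["score"], 3), pred["sequence"])
```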