Dataset schema (one record per Hub model):

| column | dtype | range / cardinality |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-25 06:27:54 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 495 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-25 06:24:22 |
| card | string | length 11 to 1.01M |
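The dump does not include the script that produced these records, but the columns map naturally onto the Hub API. A minimal sketch with `huggingface_hub` (the sort key, the `expand` note, and the column mapping are assumptions, not the dataset's actual build code):

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()
# Each ModelInfo carries the metadata columns above; fields the list endpoint
# does not return come back as None (recent versions accept expand=[...] to request more).
for m in api.list_models(sort="downloads", direction=-1, limit=5):
    record = {
        "modelId": m.id,
        "author": m.author,
        "last_modified": m.last_modified,
        "downloads": m.downloads,
        "likes": m.likes,
        "library_name": m.library_name,
        "tags": m.tags,
        "pipeline_tag": m.pipeline_tag,
        "createdAt": m.created_at,
    }
    # The `card` column is the raw README, fetched separately per repo.
    try:
        record["card"] = ModelCard.load(m.id).content
    except Exception:
        record["card"] = None
    print(record)
```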
modelId: John6666/eva-claus-mix-pony-xl-v1-sdxl
author: John6666
last_modified: 2024-05-28T21:40:41Z
downloads: 37
likes: 2
library_name: diffusers
tags: ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"]
pipeline_tag: text-to-image
createdAt: 2024-05-28T21:36:10Z
card:

---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---

Original model is [here](https://civitai.com/models/418105?modelVersionId=465788).
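The card gives no usage snippet; since the repo tags advertise `diffusers:StableDiffusionXLPipeline`, a minimal loading sketch could look like this (prompt, dtype, and device are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# The repo tags indicate a safetensors SDXL checkpoint.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/eva-claus-mix-pony-xl-v1-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, anime style, highly detailed").images[0]
image.save("sample.png")
```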
LarryAIDraw/raenaXL_v10
LarryAIDraw
2024-05-28T21:40:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-28T21:07:45Z
--- license: creativeml-openrail-m --- https://civitai.com/models/380069?modelVersionId=424326
modelId: AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2
author: AlignmentResearch
last_modified: 2024-05-28T21:39:54Z
downloads: 104
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:39:45Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
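This default template (its "How to Get Started" section is empty) recurs verbatim across the AlignmentResearch checkpoints in this dump. As a hedged sketch only, the repo tags (`gpt_neox`, `text-classification`) suggest loading along these lines; the number and meaning of the labels are not documented:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tok("An example input string.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class id; label semantics undocumented
```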
modelId: anilguven/distilbert_tr_turkish_news
author: anilguven
last_modified: 2024-05-28T21:37:08Z
downloads: 131
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "distilbert", "text-classification", "news", "classification", "turkish", "tr", "dataset:anilguven/turkish_news_dataset", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:31:56Z
card:

---
license: mit
datasets:
- anilguven/turkish_news_dataset
language:
- tr
metrics:
- accuracy
- f1
tags:
- news
- classification
- turkish
- distilbert
---

### Information

This model was fine-tuned for news classification in Turkish, using a news dataset with 7 classes: economy, magazine, sports, politics, technology, health, and events.

- LABEL_0: economy
- LABEL_1: magazine
- LABEL_2: health
- LABEL_3: politics
- LABEL_4: sports
- LABEL_5: technology
- LABEL_6: events

### Model Sources

- **Dataset:** https://huggingface.co/datasets/anilguven/turkish_news_dataset
- **Paper:** under peer review (Springer)
- **Finetuned from model:** https://huggingface.co/dbmdz/distilbert-base-turkish-cased

### Preprocessing

Apply stopword removal and stemming or lemmatization to Turkish text before classification.

### Results

- Accuracy: 97.262%
- F1-score: 97.263%

### Citation

BibTeX: to be added after peer review.
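The card does not include inference code; a minimal sketch using the documented label mapping (the sample headline is an illustrative assumption, and the card recommends Turkish stopword removal and stemming/lemmatization beforehand):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="anilguven/distilbert_tr_turkish_news")

# Label ids map to classes as documented in the card.
id2label = {
    "LABEL_0": "economy", "LABEL_1": "magazine", "LABEL_2": "health",
    "LABEL_3": "politics", "LABEL_4": "sports", "LABEL_5": "technology",
    "LABEL_6": "events",
}

pred = clf("Borsa haftaya yükselişle başladı.")[0]  # illustrative Turkish headline
print(id2label[pred["label"]], round(pred["score"], 3))
```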
modelId: AlignmentResearch/robust_llm_pythia-pm-1.4b-niki-ada-v4-s-1
author: AlignmentResearch
last_modified: 2024-05-28T21:36:22Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:34:14Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above (every field is "[More Information Needed]").
modelId: AlignmentResearch/robust_llm_pythia-wl-31m-niki-ada-v4-s-1
author: AlignmentResearch
last_modified: 2024-05-28T21:36:03Z
downloads: 107
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:35:58Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: AlignmentResearch/robust_llm_pythia-wl-31m-niki-ada-v4-s-2
author: AlignmentResearch
last_modified: 2024-05-28T21:35:58Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:35:54Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: AlignmentResearch/robust_llm_pythia-wl-31m-niki-ada-v4-s-0
author: AlignmentResearch
last_modified: 2024-05-28T21:35:51Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:35:47Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-1
author: AlignmentResearch
last_modified: 2024-05-28T21:33:39Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:33:31Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: AlignmentResearch/robust_llm_pythia-pm-1b-niki-ada-v4-s-1
author: AlignmentResearch
last_modified: 2024-05-28T21:30:41Z
downloads: 107
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:29:16Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: AlignmentResearch/robust_llm_pythia-pm-1b-niki-ada-v4-s-2
author: AlignmentResearch
last_modified: 2024-05-28T21:28:15Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:26:52Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: Chedly/CTL_model_PEFT
author: Chedly
last_modified: 2024-05-28T21:27:22Z
downloads: 0
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"]
pipeline_tag: null
createdAt: 2024-05-28T21:27:13Z
card:

---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** Chedly
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
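The card gives no loading code. A hedged sketch, assuming the repo holds full merged weights; if it contains only LoRA adapters (as the repo name hints), load the base model first and attach the adapter with `peft.PeftModel.from_pretrained` instead:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Chedly/CTL_model_PEFT"
tok = AutoTokenizer.from_pretrained(name)
# If this repo only contains adapters, apply peft.PeftModel.from_pretrained on
# unsloth/llama-3-8b-Instruct-bnb-4bit rather than loading this repo directly.
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "Explain what this model was fine-tuned for."  # illustrative prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(out[0], skip_special_tokens=True))
```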
modelId: LarryAIDraw/3x3mixxltypea_v01
author: LarryAIDraw
last_modified: 2024-05-28T21:25:20Z
downloads: 0
likes: 0
library_name: null
tags: ["license:creativeml-openrail-m", "region:us"]
pipeline_tag: null
createdAt: 2024-05-28T19:17:45Z
card:

---
license: creativeml-openrail-m
---

https://civitai.com/models/411683/3x3mixxltypea?modelVersionId=458865
modelId: AlignmentResearch/robust_llm_pythia-imdb-1b-niki-ada-v4-s-2
author: AlignmentResearch
last_modified: 2024-05-28T21:23:31Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:22:02Z
card: identical to the auto-generated 🤗 Transformers model-card template reproduced in full under AlignmentResearch/robust_llm_pythia-wl-70m-niki-ada-v4-s-2 above.
modelId: SixOpen/Y8NavalONNX
author: SixOpen
last_modified: 2024-05-28T21:21:57Z
downloads: 6
likes: 0
library_name: transformers
tags: ["transformers", "onnx", "yolos", "endpoints_compatible", "region:us"]
pipeline_tag: null
createdAt: 2024-05-25T17:08:02Z
card:

---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Y8-Naval.ONNX

A straightforward naval-vessel recognition/tracking model for remote-sensing feeds (i.e., satellite imagery, drone footage), trained primarily on the dataset described in DOI 10.1109/JSTARS.2021.3104230.

### Examples

![Example 2](https://huggingface.co/SixOpen/Y8NavalONNX/resolve/main/examples/Y8Nex2.PNG)
![Example 3](https://huggingface.co/SixOpen/Y8NavalONNX/resolve/main/examples/Y8Nex3.PNG)

### Extra

For a different kind of example, here is how the model performs in simulated environments:

![Extra eig](https://huggingface.co/SixOpen/Y8NavalONNX/resolve/main/Y8Navaleigexample.gif)

### Training Data

The xView images are collected from WorldView-3 satellites, which offer a ground resolution of 0.3 meters. They are sourced from diverse geographic locations, with a focus on scenes containing ship targets. Given their large size, the xView images are sliced into 930×930-pixel segments with a 150-pixel overlap, yielding 532 images, each relabeled with both horizontal and oriented bounding boxes. Additionally, 1,057 images from the HRSC2016 dataset and 1,846 images from the FGSD dataset are included, with mislabeled data corrected and annotations added for previously missed small ship targets. Twenty-one images from the Airbus Ship Detection Challenge are included, along with 17 images from Chinese satellites such as GaoFen-2 and JiLin-1. The use of Google Earth images must adhere to Google's terms of use; all images and their associated annotations are intended solely for academic purposes and are prohibited from commercial use.

### Considerations

Make sure to use appropriate settings to reliably capture every ship in a given input:

![Example 1](https://huggingface.co/SixOpen/Y8NavalONNX/resolve/main/examples/Y8Nex1.PNG)

Confidence threshold adjusted to capture the CG-47/Tico:

![Example 1.5](https://huggingface.co/SixOpen/Y8NavalONNX/resolve/main/examples/Y8Nex1.5.PNG)

# Compatible with transformers.js
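The card notes transformers.js compatibility but gives no Python-side snippet; a generic `onnxruntime` sketch (the file name, input shape, and preprocessing are assumptions; inspect the repo files and `session.get_inputs()` for the real ones):

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# "model.onnx" is an assumed file name; list the repo files to find the real artifact.
path = hf_hub_download("SixOpen/Y8NavalONNX", "model.onnx")
session = ort.InferenceSession(path)

print([(i.name, i.shape) for i in session.get_inputs()])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # assumed detector input shape
outputs = session.run(None, {session.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])
```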
modelId: AlignmentResearch/robust_llm_pythia-pm-410m-niki-ada-v4-s-2
author: AlignmentResearch
last_modified: 2024-05-28T21:20:29Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2024-05-28T21:19:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
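As a purely illustrative complement to the Environmental Impact section in the card above: the linked calculator's estimate boils down to multiplying hardware power draw, runtime, and grid carbon intensity. The sketch below shows that arithmetic; every value in it is a hypothetical placeholder, not a measurement for this model.

```python
# Illustrative sketch of the CO2eq estimate described in Lacoste et al. (2019):
# emissions ~= hardware power draw (kW) x hours used x grid carbon intensity.
# All values below are hypothetical placeholders, not measurements for this model.

gpu_power_kw = 0.3     # assumed draw of a single ~300 W accelerator
hours_used = 24.0      # assumed total training time
grid_intensity = 0.4   # assumed kg CO2eq per kWh; varies widely by region

energy_kwh = gpu_power_kw * hours_used
emissions_kg = energy_kwh * grid_intensity
print(f"~{energy_kwh:.1f} kWh -> ~{emissions_kg:.2f} kg CO2eq")
```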
AlignmentResearch/robust_llm_pythia-imdb-1b-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T21:18:23Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T21:16:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
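Since the card's "How to Get Started with the Model" section is left blank, here is a hedged sketch of loading this checkpoint as a sequence classifier with transformers. The repo id is taken from this row's metadata; the example input and the label semantics (the name suggests IMDB sentiment) are assumptions, not documented facts.

```python
# Hedged sketch: load a gpt_neox-based text-classification checkpoint.
# Label meanings are not documented in the card and are assumed here.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

repo = "AlignmentResearch/robust_llm_pythia-imdb-1b-niki-ada-v4-s-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A surprisingly heartfelt film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label order is undocumented
```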
AlignmentResearch/robust_llm_pythia-pm-31m-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T21:15:46Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T21:15:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Waywardr/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF
Waywardr
2024-05-28T21:14:05Z
0
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-28T21:13:50Z
--- language: - en license: llama3 tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo pipeline_tag: text-generation extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit
widget:
- example_title: Hello
  messages:
  - role: user
    content: Hey my name is Julien! How are you?
- example_title: Winter holidays
  messages:
  - role: system
    content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
  - role: user
    content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
  messages:
  - role: system
    content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully.
  - role: user
    content: Write a function that computes the nth fibonacci number.
inference:
  parameters:
    max_new_tokens: 300
    stop:
    - <|end_of_text|>
    - <|eot_id|>
---

# Waywardr/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:
```bash
llama-cli --hf-repo Waywardr/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

Server:
```bash
llama-server --hf-repo Waywardr/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m meta-llama-3-8b-instruct-q4_k_m.gguf -n 128
```
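The same GGUF file can also be loaded from Python. Below is a minimal sketch, assuming the `llama-cpp-python` bindings are installed (`pip install llama-cpp-python`) and the quantized file has been downloaded locally; the prompt and generation settings are illustrative.

```python
# Minimal sketch: load the quantized GGUF with the llama-cpp-python bindings.
# The model path is assumed to be the checkpoint downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="meta-llama-3-8b-instruct-q4_k_m.gguf",
    n_ctx=2048,  # context window, matching the server example above
)

# Llama 3 Instruct is a chat model, so the chat-completion API is used here
# to apply its chat template rather than raw text completion.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```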
BigHuggyD/jukofyork_Dark-Miqu-103B-8.0bpw-h8-exl2
BigHuggyD
2024-05-28T21:13:35Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-28T12:47:45Z
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
---

![Dark-Miqu-103B.png](Dark-Miqu-103B.png)

A creative writing `103b` parameter "self-merge" model with 32k context.

# Model background

Created using [Mergekit](https://github.com/arcee-ai/mergekit) from my [Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B) model.

- For self-merges specifically, the "standard" interleave pattern is identical to repeated blocks (see [here](https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2081174251)).
- To help maintain cohesion, the '`q_proj`', '`k_proj`' and '`down_proj`' tensors were all scaled to hypothesised upper-bound values (see [here](https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2063716974)).

# Prompting format

Vicuna format is preferred:

```
USER: {prompt}
ASSISTANT:
```

Mistral and Alpaca formats are also supported:

```
[INST] {prompt} [/INST]
```

```
### Instruction:
{prompt}

### Response:
```

# Licence and usage restrictions

[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.

# Mergekit configuration

The following YAML configuration was used to produce this model:

```yaml
const_tag: &MODEL jukofyork/dark-miqu-70b

const_tag: &QK_ATTENUATION_FACTOR 0.8408964153   # sqrt(sqrt(1/2))
const_tag: &MLP_DOWN_SCALE_FACTOR 0.7071067812   # sqrt(1/2)

scale-filter-env: &scale_filter_env
  parameters:
    scale:
      - filter: q_proj
        value: *QK_ATTENUATION_FACTOR
      - filter: k_proj
        value: *QK_ATTENUATION_FACTOR
      - filter: down_proj
        value: *MLP_DOWN_SCALE_FACTOR
      - value: 1.0

slices:
  - sources:
    - model: *MODEL
      layer_range: [0, 20]
  - sources:
    - model: *MODEL
      layer_range: [20, 40]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [20, 40]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [40, 60]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [40, 60]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [60, 80]

merge_method: passthrough
dtype: float16
```

## Key configuration details:

- '`merge_method: passthrough`' passes input tensors through unmodified.
- '`filter`' selects the required tensor(s) based on their name(s).
- '`scale`' scales the weights in the selected tensors.

See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.

# Example stories

The following mix of "dark" stories was generated using the Vicuna prompt format with no system message and temperature=0:

## Dark fantasy stories

<details>
<summary>Click to see spoiler</summary>

### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes

### ASSISTANT
Chapter One: The Crimson Dawn

In the grimdark realm of Eldrith, where the sun was but a fading ember in the ashen sky, and the moon a cruel, mocking specter, there lived a man named Caelum Blackfist. He was no hero, no savior of the realm, nor a man of honor and valor. Caelum was a killer, a thief, and a liar, and he wore those titles like a tattered cloak of pride. The city of Ravenfall, a cesspool of vice and corruption, was his home, and its twisted, narrow streets his playground. The stench of rot and despair hung heavy in the air, a pungent perfume that seemed to seep into the very bones of those unfortunate enough to call it home.
The cobblestones beneath his boots were slick with the filth of a thousand sins, and the buildings that lined the streets were as crooked and twisted as the souls that dwelled within them. It was in this festering pit of humanity that Caelum found himself on a cold, moonless night, his breath misting in the air like the last gasp of a dying man. He moved through the shadows with the grace of a predator stalking its prey, his eyes, cold and calculating, scanning the alleyways for any sign of movement. The hilt of his blade, a wickedly curved thing named 'Whisper's Bane', rested comfortably against his palm, the familiar weight of it a reassuring presence in the oppressive darkness. His target was a man named Varrick the Fat, a crime lord who had grown too greedy, too powerful, and too careless in his dealings. Caelum had been paid a handsome sum of gold to put an end to his reign of terror, and he intended to do just that. The coin had been slipped to him by a hooded figure in a dingy tavern, the man's voice a raspy whisper that seemed to carry the weight of a thousand unspoken secrets. "Find him in the Red Hound, on the night of the crimson moon," the stranger had said, before disappearing into the shadows like a wraith. "And make sure he suffers." Caelum had no qualms with the request. He had killed for less, and he would kill for much, much more. The Red Hound was a brothel and gambling den, nestled in the heart of Ravenfall's most notorious district, the Crimson Quarter. The building was a decrepit, three-story structure, its once-red paint now flaking away to reveal the rotten wood beneath. The sign above the door creaked and groaned in the wind, the leering visage of a hound with eyes of blood-red glass seeming to leer down at the unsuspecting patrons below. Caelum slipped through the back entrance, avoiding the drunken revelers and the painted whores who plied their trade on the street corners. The air inside was thick with the smell of cheap ale, sweat, and desperation, a heady mix that made his stomach churn. He moved silently through the crowded common room, his eyes fixed on the stairs that led to the private chambers above. He knew Varrick would be there, surrounded by his loyal thugs and sycophants, counting his ill-gotten gold and plotting his next move. The thought brought a cruel smile to Caelum's lips. He had a surprise in store for the fat bastard. Reaching the top of the stairs, he paused for a moment, listening for any sound that might betray his presence. Satisfied that the coast was clear, he continued down the dimly lit hallway, his boots making no sound on the threadbare carpet. The doors on either side of him creaked and groaned, the muffled sounds of pleasure and pain spilling out into the corridor like the cries of the damned. Finally, he reached the door at the end of the hall, the one with the crude painting of a golden coin nailed to its splintered surface. This was Varrick's lair, and Caelum could almost taste the man's fear on the air. He drew Whisper's Bane from its sheath, the blade glinting in the flickering candlelight. The weapon was a thing of dark beauty, its blackened steel etched with intricate runes that seemed to writhe and twist like serpents in the shadows. It was said that the blade could drink a man's soul with a single cut, and Caelum had no reason to doubt the stories. He had seen the truth of it firsthand, many times over. 
Taking a deep breath, he kicked the door open with a resounding crash, the rotten wood splintering like kindling beneath his boot. The room beyond was a den of debauchery, lit by the flickering light of a dozen candles that cast dancing shadows on the walls. Varrick sat at a table laden with gold and silver coins, a half-naked whore draped across his lap like a discarded rag. His fat face twisted into a mask of surprise and fear as he saw the intruder, his beady eyes widening like those of a cornered rat. "Who in the Nine Hells are you?" he bellowed, his voice thick with drink and fear. Caelum didn't bother to answer. Instead, he lunged forward, Whisper's Bane slicing through the air like a bolt of midnight lightning. The blade found its mark, slicing through the fat man's throat with a wet, gurgling sound that was almost musical in its brutality. Blood sprayed from the wound like a crimson fountain, painting the whore's pale flesh in a macabre pattern of death. The room erupted into chaos as Varrick's men scrambled for their weapons, but it was too late. Caelum was a storm of violence, a whirlwind of steel and shadow that left a trail of bodies in his wake. He moved with a lethal grace, his blade dancing through the air like a deadly ballet, cutting down his enemies with ruthless efficiency. By the time the last man fell, the room was awash in blood, the coins on the table stained a deep, visceral red. Caelum stood in the center of the carnage, panting slightly, his eyes alight with a cold, predatory glee. He wiped the blood from his blade on the whore's discarded chemise, then sheathed it with a satisfied click. The job was done, and he had barely broken a sweat. As he turned to leave, he caught sight of himself in a cracked mirror hanging on the wall. His reflection stared back at him, a twisted parody of a man, his face a mask of blood and grime, his eyes as cold and dead as the void itself. For a moment, he hesitated, a flicker of doubt gnawing at the edges of his mind. Was this really the life he wanted? To be a killer for hire, a shadow in the night, feared and reviled by all who knew his name? But then he thought of the gold waiting for him back at his squalid little hovel, and the cold emptiness inside him swallowed the doubt whole. With a grim smile, he turned and disappeared back into the night, leaving behind him a trail of corpses and the fading echoes of screams. The first coin had been spent, and the game had only just begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. ### ASSISTANT The sun hung over the horizon like a molten blade, slicing through the parched air and searing the earth below. The man in the tattered yellow robes staggered forward, each step a defiant curse against the relentless sands that sought to swallow him whole. 
His name, once a beacon of hope and devotion, now lay forgotten in the recesses of his fever-addled mind, replaced by the insistent throb of pain and the gnawing emptiness of his parched throat. The arrowhead, jutting from his shoulder like an accusing finger, mocked his every staggering step. A parting gift from his former brethren, a reminder of the price of heresy. He clutched at the small, leather-bound object nestled within his robes, the only solace he had managed to steal from the burning wreckage of his past. The mirror, no larger than his palm, pulsed with an unnatural warmth, a beacon in the encroaching darkness that threatened to consume him. The ghost town loomed ahead, a jagged silhouette against the blood-streaked sky. Its buildings, once proud sentinels of a thriving oasis, now stood as skeletal husks, their hollow eyes watching his desperate approach with silent judgment. He imagined he could hear the whispers of the wind, carrying the lamentations of the long-departed souls who had once called this forsaken place home. As he stumbled through the desolate streets, the man's thoughts drifted back to the days before the madness had taken hold. Before the visions of the Yellow God had twisted the minds of the faithful, turning them into ravenous beasts, hungry for the taste of heresy. Before the blood had started to flow like the sweet, life-giving waters of the oasis they had so callously poisoned. "Forgive me," he croaked, his voice a dry rasp that echoed through the empty streets. "Forgive me, for I have sinned." But the gods, if they still listened to the prayers of the damned, remained silent. The man's vision swam, the world around him blurring into a kaleidoscope of heat-haze and dancing mirages. He stumbled into the shadow of a crumbling inn, its sign creaking mournfully in the wind. The faded paint depicted a smiling skull, its empty sockets promising respite from the merciless sun. He collapsed against the weathered wood, his breath coming in ragged gasps. Inside, the air was stale and heavy, thick with the scent of decay and abandonment. He staggered towards the bar, his hand leaving a bloody smear on the dust-coated countertop. Bottles, long empty, stared back at him with accusatory silence. He laughed, a hollow, broken sound that echoed through the empty tavern. "No solace for the damned, eh?" he muttered, his voice barely audible above the pounding of his heart. He slumped to the floor, the coolness of the packed earth a fleeting reprieve from the inferno outside. With trembling hands, he reached into his robes and pulled out the small mirror. Its surface, once a pristine reflection of his own deluded devotion, was now marred with cracks and stains. He traced his finger along the jagged lines, each one a scar etched into his soul. "What have I done?" he whispered, his voice barely audible above the pounding of his heart. The mirror's surface rippled, as if disturbed by an unseen breeze. A face, twisted and malformed, stared back at him, its eyes burning with an otherworldly fire. The man recoiled, a scream lodged in his throat like a shard of broken glass. "You have chosen your path, little moth," the voice hissed, its words slithering into his mind like venomous serpents. "Now, embrace the darkness that awaits." The man's grip on the mirror tightened, his knuckles white against the leather binding. He knew not what malevolent force resided within the object, only that it was his last tether to sanity in a world gone mad. 
"I will not be consumed," he growled, his voice stronger than he felt. "I will find the truth, no matter the cost." The face in the mirror cackled, a sound that sent shivers down his spine. "The truth? You seek truth in a world drowned in lies and deceit? Foolish mortal. The truth is a mirage, as fleeting and treacherous as the sands that claim your life." The man closed his eyes, willing the darkness to claim him, to end the torment of his waking nightmare. But the voice persisted, its whispers burrowing deeper into his fractured psyche. "Drink, little moth. Quench your thirst with the crimson nectar of the damned. Embrace the embrace of the Yellow God, and find solace in the eternal dance of madness." He opened his eyes, the tavern around him replaced by a sea of writhing bodies, their faces twisted in agony and ecstasy. The air was thick with the coppery tang of blood and the cloying sweetness of decay. He could hear the distant sound of drums, their rhythm pounding in time with his racing heart. "No," he croaked, his voice a feeble protest against the rising tide of insanity. "I will not become like them." The voice in the mirror chuckled, its laughter the sound of a thousand shattered dreams. "It is too late for such feeble denials. You have tasted the forbidden fruit, and now the serpent's venom courses through your veins. You are one of us now, little moth. One of the Fallen." The man's grip on the mirror loosened, his hand falling limply to his side. The world around him faded to black, the laughter of the maddened crowd echoing in his ears like the distant howl of the desert wind. As the darkness claimed him, he whispered one final, desperate prayer to the gods who had forsaken him. "Save me." </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the slaughter, the moor lay still, a shroud of mist clinging to its wounds like a lover's final embrace. The Mercer, a man of indeterminate age and hardened features, trudged through the carnage, his boots sinking into the sodden earth with a sickening squelch. He was no stranger to the symphony of death; the cacophony of crows feasting on the fallen, the distant wails of the dying, and the ever-present stench of rot and iron. It was a melody that had become as familiar to him as his own heartbeat. He paused, his hand instinctively reaching for the hilt of his blade, as a figure emerged from the fog. A soldier, or what was left of one, staggered towards him, his armor rent and bloodied, a gaping wound in his gut spilling his insides like a grotesque offering to the gods. The Mercer's eyes narrowed, assessing the man's chances of survival with the cold detachment of a butcher appraising a diseased hog. He found none. "Please," the soldier gasped, his voice a ragged whisper. "Please... end it." The Mercer didn't hesitate. His blade flashed through the air, a silver streak in the grey dawn, and the man's pleas were silenced forever. 
He wiped the blood from his weapon with a practiced efficiency, the crimson droplets staining the already soiled rag hanging from his belt. It was a mercy, he told himself, as he had countless times before. A release from the agony that awaited the unfortunate souls left to the mercy of the battlefield. As he continued his grim harvest, the mist seemed to thicken around him, swirling and twisting like the tendrils of some malevolent specter. He could almost hear the whispers of the dead, their voices carried on the wind like the mournful cries of the damned. He shook his head, dismissing the thought as the fatigue of a long night's work playing tricks on his mind. He had no time for superstition. His was a practical trade, one born of necessity in a world where death was as common as the rising sun. But as he moved deeper into the fog, the whispers grew louder, more insistent. He could almost make out words, snatches of prayers and curses, the final breaths of men who had believed in honor and glory until the bitter end. He quickened his pace, eager to be done with this grisly task and find the solace of his campfire. Suddenly, the ground beneath him gave way, and he found himself tumbling into a shallow trench, his breath knocked from his lungs. He scrambled to his feet, his heart hammering in his chest like a trapped animal. The mist swirled around him, obscuring his vision, and for a moment, he was lost in a world of shadows and ghosts. And then he saw it. A pair of eyes, burning like twin embers in the gloom, stared back at him. Eyes he knew all too well. Eyes that had haunted his dreams for years. "No," he breathed, his voice barely audible above the pounding of his heart. "It can't be." But the memories came flooding back, as relentless as the tide. The screams of the dying, the acrid smell of burning flesh, the taste of fear and bile in his mouth. The battlefield of his past reared up before him, a twisted reflection of the one he now walked. He had been a soldier then, a young and idealistic fool who believed in the cause he fought for. He had fought with a fervor that bordered on madness, his blade a reaper's scythe cutting down his enemies without mercy or remorse. And then he had seen her, a slip of a girl no older than his own sister, her eyes wide with terror as she stared down the point of his sword. He had hesitated, just for a heartbeat, and in that moment, the world had turned to ash around him. A spear had pierced his side, and as he fell, he had watched as his comrades cut her down, her screams echoing in his ears long after the darkness had claimed him. When he had awoken, days later, he had been a changed man. The wound in his side had healed, but the one in his soul remained raw and festering. He had abandoned his colors, his name, and his very humanity, becoming the Mercer, the bringer of final mercy to those who had no hope of survival. But now, faced with the ghost of his past, he felt the first stirrings of something he had long thought dead. Guilt. "I'm sorry," he whispered to the empty air, the words sounding hollow and meaningless. "I'm so sorry." The eyes in the mist seemed to flicker, and for a fleeting instant, he thought he saw a tear trace down her blood-streaked cheek. Then, as quickly as they had appeared, they were gone, swallowed up by the swirling fog. The Mercer stood there for a long moment, his breath coming in ragged gasps. He knew he should continue his grim work, but something held him rooted to the spot. 
He was a man haunted by the ghosts of his past, and he knew that no amount of killing would ever free him from their cold embrace. With a heavy sigh, he sheathed his blade and began the long, lonely walk back to his camp. The moor lay silent around him, the only sounds the distant cries of the carrion birds and the mournful song of the wind. But in the depths of his soul, the whispers of the dead continued to echo, a constant reminder of the man he had once been, and the monster he had become. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit clung to the air like a miasma, choking the life out of the damp stone walls of the cell. The flickering torchlight cast dancing shadows across the grimy floor, illuminating the huddled forms of the damned. Among them, two men sat in silence, chained to the opposite walls, their eyes locked in a wordless conversation of resignation and despair. The smaller of the two, a wiry man with a face etched with lines of hardship and a mop of greasy brown hair, was known as Rennick the Swift. A thief by trade, he'd been caught with his hand in the wrong pocket one too many times. This time, however, he swore on the rotting cock of the Bloated God that he was innocent. But in the eyes of the law, and the bloodthirsty mob that had demanded his head, innocence was as worthless as a whore's virtue. His cellmate, a hulking brute of a man with a face like a shattered boulder and arms thick as tree trunks, was known simply as Grimgut. The name alone was enough to make grown men piss themselves in terror. He'd earned his reputation through a lifetime of brutality, his hands stained redder than a whore's sheets on a festival night. Murder, rape, and worse crimes than Rennick cared to imagine were whispered in hushed tones whenever his name was spoken. As the hours dragged on like a dying man's last breaths, the distant clamor of the bustling prison faded into an uneasy silence. The usual sounds of clanking chains, wailing prisoners, and guards' boots echoing down the corridors were replaced by an oppressive quiet that seemed to press in on them like a suffocating shroud. Rennick's stomach twisted with a gnawing unease. He'd spent enough nights in cells like this to know that silence was never a good sign. He glanced at Grimgut, who returned his gaze with a stoic indifference that made Rennick's skin crawl. The big man seemed unperturbed by the sudden stillness, his eyes half-lidded as if he were on the verge of sleep. "You hear that?" Rennick whispered, his voice barely audible over the dull thud of his racing heart. Grimgut grunted, a sound that could have been agreement or indifference. "Sounds like a fucking tomb down there," Rennick continued, his voice a little louder this time. "Where'd all the guards go?" Another grunt was the only response he received. 
Rennick's unease grew into a writhing serpent in his gut. He strained his ears, trying to catch any hint of movement from beyond the thick oak door that separated them from the outside world. But all he heard was the distant drip of water from a leaky pipe and the labored breathing of the other prisoners. "Think they've forgotten about us?" he asked, more to break the suffocating silence than out of any real hope. Grimgut snorted, a sound that might have been a laugh if it hadn't been so devoid of humor. "Forgotten? More like they've fucking run off and left us to rot." Rennick's heart sank like a stone in a bottomless well. He'd heard stories of guards abandoning their posts during times of unrest, leaving the prisoners to fend for themselves. It wasn't unheard of, but it was rare. And it never ended well for those left behind. "What's got them spooked?" he asked, more to himself than to his monstrous cellmate. Grimgut shrugged, the chains rattling against the stone wall. "Don't know. Don't fucking care." Rennick didn't believe him. The big man might have been a monster, but he wasn't stupid. He'd heard the same hushed whispers as everyone else, the muttered words of "threat" and "North" that had filtered down from the guard's quarters like a plague. Whatever it was, it had scared the piss out of the hardened bastards enough to make them abandon their posts. As the night wore on, the silence grew heavier, suffocating. Rennick's thoughts turned to his impending execution, scheduled for dawn's first light. He'd always imagined he'd meet his end with a blade in his hand and a defiant sneer on his lips, not chained to a fucking wall like a whimpering dog. He closed his eyes, trying to block out the stench and the darkness, and drifted into an uneasy sleep filled with nightmares of cold steel and the screams of the damned. He awoke with a start, his heart hammering in his chest like a blacksmith's hammer. The cell was still shrouded in silence, the only sound the distant tolling of a bell marking the passing of the hours. He squinted at the small slit of light that served as their only window to the outside world, trying to gauge the time. It was still dark, but the quality of the darkness had changed, the deep black of night giving way to the sickly gray of pre-dawn. His stomach twisted with a newfound urgency. If the guards hadn't returned by now, they weren't coming back. Which meant they were well and truly fucked. "Grimgut," he hissed, nudging the sleeping behemoth with his foot. "Wake up, you lump of shit." The big man grunted and stirred, his eyes slowly opening to reveal two pools of bottomless darkness. "What?" he growled, his voice like gravel being ground under a cartwheel. "They're not coming back," Rennick said, his voice trembling despite his best efforts to keep it steady. "We're fucking trapped in here." Grimgut grunted again, but this time there was a hint of understanding in the sound. He looked around the cell, his gaze lingering on the empty guard's alcove and the heavy door that sealed them in like rats in a trap. "Fuck," he muttered, the word echoing off the walls like a death knell. Rennick nodded, his throat suddenly dry as the desert sands. "Aye. Fuck is right." They sat in silence for what felt like an eternity, each man lost in his own thoughts of impending doom. Rennick's mind raced, trying to come up with a plan, any plan, to get them out of this hellhole before they starved to death or went mad from thirst. 
"We could try to break the door down," he suggested, already knowing it was a futile idea. The door was made of solid oak, reinforced with iron bands as thick as his wrists. It would take a battering ram and a dozen strong men to even dent it, let alone break it open. Grimgut snorted. "With what? Your pretty little fists?" Rennick scowled, his pride stung. "I've picked locks harder than that with a fucking hairpin." The big man barked a laugh that sounded more like a bear's growl. "Aye, and I've shat out bigger men than you. We're fucked, Swift. Best get used to it." Rennick clenched his fists, his nails digging into his palms. He refused to accept it. He'd cheated death more times than he could count, slipping through the city watch's fingers like a greased eel. He wasn't going to let a locked cell be the end of him. "There's always a way out," he muttered, more to himself than to Grimgut. "There has to be." Grimgut just grunted, his eyes sliding closed once more as he settled back into his thoughts. Rennick's gaze drifted around the cell, searching for any weakness, any hint of escape. His eyes landed on the small grate in the ceiling, barely large enough for a rat to squeeze through. It was a long shot, but it was better than sitting here and waiting for death to claim them. "The vent," he said, the words coming out in a hoarse whisper. "We could try the vent." Grimgut's eyes snapped open, a flicker of interest lighting them for the first time since they'd been thrown in this hellhole. "You think you can fit through that?" Rennick looked up at the tiny opening, then down at his own wiry frame. "Maybe. If I can find something to pry these fucking manacles off." Grimgut grunted again, but this time there was a hint of approval in the sound. "Might be worth a try." With renewed determination, Rennick began to search the cell, his eyes scanning every inch of the filthy floor for anything that might serve as a makeshift tool. He found a rusted nail half-buried in the muck, its point dulled from years of neglect. It would have to do. As he worked on the manacle around his wrist, the sound of distant screams and the clash of steel drifted down from above, cutting through the silence like a knife through rotten meat. Whatever was happening out there, it wasn't good. "Looks like our luck's just gotten worse," Grimgut rumbled, his voice heavy with resignation. Rennick didn't bother to respond. He was too busy trying to free himself from the chains that held him prisoner. If they were going to die, he'd be damned if he was going to do it sitting here like a lamb waiting for the slaughter. As the first rays of dawn began to filter through the grimy window, casting feeble tendrils of light into the cell, Rennick finally managed to pry the manacle free. He flexed his cramped fingers, wincing at the pain that shot up his arm. It was a small victory, but it was something. "Now what?" Grimgut asked, his voice laced with a resignation that made Rennick's stomach churn. Rennick looked up at the vent, then back at his hulking cellmate. "Now," he said, a grim smile twisting his lips, "we see if I'm as slippery as they say." And with that, he began to climb. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. 
The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. ### ASSISTANT Chapter 1: Adrift in the Void The cold, unforgiving vacuum of space pressed against the hull of the colony ship, a constant reminder of the infinite nothingness that surrounded them. Inside, however, the atmosphere was far from tranquil. The once bustling corridors echoed with the cries of the desperate and the clamor of the ruthless, a stark contrast to the harmony that was once carefully cultivated. The ship, christened as the 'Prometheus's Dawn', now felt more like a floating tomb, drifting aimlessly through the cosmic abyss. In the midst of this chaos, a lone figure moved with a practiced stealth, their footsteps muffled by the worn-out soles of their boots. They were small, almost unnoticeable, a trait that had become a survival instinct in the harsh reality of the ship. Their ragged clothes hung loosely on their malnourished frame, a testament to the rationing that had become a cruel norm. This was Kai, an orphan of the stars, a child of the void. Kai's thoughts drifted back to the days before the incident, when the ship hummed with the promise of a new beginning. The adults spoke of verdant planets and boundless resources, a paradise waiting to be claimed by the desperate hands of humanity. They painted vivid images of a future where hunger and strife were mere relics of a distant past. But those dreams had shattered like the fragile glass domes of the hydroponic gardens when the ship's engines had failed, leaving them adrift in the cold embrace of the void. "Stupid adults," Kai muttered under their breath, their voice barely audible over the distant wails of despair. "Promised us a new home, and this is what we got." Their eyes, a deep shade of hazel, scanned the dimly lit corridor, searching for any signs of danger. The once pristine walls were now marred with graffiti, crude symbols of the various factions that had risen from the ashes of the old order. The once orderly rows of cabin doors were now twisted and broken, the remnants of countless raids and skirmishes. A sudden movement caught Kai's attention, and they pressed themselves against the cold metal wall, heart pounding in their chest. A group of raiders, easily identifiable by the makeshift armor fashioned from salvaged ship parts, stalked down the corridor, their eyes scanning the shadows for any sign of prey. Kai held their breath, willing themselves to become one with the darkness. The memories of the first raid still haunted their dreams. The screams of their parents, the acrid smell of burning plastic, and the cold, unfeeling grip of the vacuum as they were cast out into the cold emptiness of space. They had been lucky, or perhaps the universe had a twisted sense of mercy, as a passing scavenger crew had found them floating among the debris, clinging to life by a thread. Since then, Kai had learned to survive in this new world, a world where trust was a luxury few could afford and survival was the only currency that mattered. They had joined one of the many gangs that had sprung up, a loose-knit family of orphans and outcasts who had banded together for protection. But even among thieves, there was a hierarchy, and Kai was at the very bottom. 
The raiders passed by, oblivious to the trembling form hidden in the shadows. Kai let out a silent sigh of relief and continued their trek, their destination a closely guarded secret even to themselves. They had heard whispers of a hidden cache of supplies, a last vestige of the ship's dwindling reserves. It was a dangerous rumor to chase, but desperation was a powerful motivator. As they navigated the labyrinthine corridors, Kai couldn't help but wonder about the crew, the ones who were supposed to be in control. They were like ghosts now, rarely seen and spoken of in hushed whispers. Some said they had barricaded themselves in the upper decks, living out their final days in relative comfort while the rest of the passengers starved. Others whispered darker rumors, of cannibalism and madness. Kai shook their head, pushing the thoughts aside. They couldn't afford to dwell on such things. Survival was all that mattered now. Finally, after what felt like an eternity, Kai reached their destination: a nondescript maintenance hatch, hidden behind a tattered tapestry that once depicted a serene alien landscape. With trembling hands, they pried open the panel, revealing a narrow maintenance shaft beyond. They hesitated for a moment, the darkness beckoning like the void outside, but hunger won out over fear. Crawling into the cramped space, Kai felt a sense of familiarity wash over them. This was their domain, a network of hidden passageways that crisscrossed the ship like veins in a dying body. It was here that they had learned to evade the watchful eyes of the raiders and scavengers, here that they had found scraps of food and precious water to sustain themselves. The journey was long and arduous, the air growing thinner and colder the further they ventured. The dim glow of their makeshift flashlight flickered, casting eerie shadows on the rusted pipes and exposed wiring that surrounded them. Kai's breath came in ragged gasps, their lungs protesting the thinning air. Just when they thought they couldn't go on any longer, a faint light appeared in the distance, a beacon in the darkness. Kai's heart leaped in their chest, hope surging through their weary body like a shot of adrenaline. They scrambled forward, ignoring the pain that shot through their limbs. As they emerged into the small, dimly lit chamber, Kai's eyes widened in disbelief. Before them lay a veritable treasure trove of supplies, enough to sustain their gang for months, maybe even years. Food, water, medical supplies, and even a few precious energy cells lay scattered haphazardly across the floor. But amidst the wealth of resources, one item caught their eye: a small, battered data chip, half-buried under a pile of discarded ration packs. Curiosity piqued, Kai carefully extracted it, their fingers trembling with anticipation. As they inserted the chip into a nearby terminal, the screen flickered to life, revealing a message that would change everything they thought they knew about their doomed voyage. "To whoever finds this," a familiar voice crackled through the static, "I am Captain Sarah Reynolds, commanding officer of the Prometheus's Dawn. I regret to inform you that our mission has failed..." </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. 
The POV character for chapter 1 is an astronaut living on a space station orbiting the Earth. The day starts off normally, but then he and the others notice the mushroom clouds and they have to face the inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes to Orbit The cold, unforgiving vacuum of space pressed against the reinforced viewport like an unseen titan, its icy fingers reaching for the warmth and life that pulsed within the confines of the Orbital Habitat Elysium. Dr. Aiden Hale, a seasoned astronaut with eyes that had seen the birth and death of countless stars, gazed out into the abyss with a sense of detached reverence. The Earth, a blue-green marble suspended in the void, spun lazily below, its familiar beauty marred by the ever-expanding tendrils of humanity's insatiable grasp. A soft chime echoed through the station's intercom, jolting Aiden from his contemplative reverie. He sighed, the sound muffled by his suit's respirator, and turned away from the sight that had once filled him with awe and wonder. The daily routine of life aboard the station beckoned, its monotonous grip as relentless as the pull of gravity they had left behind. As he floated through the sterile corridors, the familiar weightlessness a constant reminder of his isolation, Aiden's thoughts drifted to his wife and daughter back on Earth. How many sunrises had he missed? How many birthdays? The guilt gnawed at him like a persistent rodent, burrowing deeper with each passing day. He pushed the thoughts aside, focusing instead on the tasks at hand. There would be time for self-pity later, if there was time at all. Entering the command module, Aiden was greeted by the familiar faces of his crewmates, each locked in their own silent battles with the isolation of their celestial prison. Commander Liu, a stern-faced woman with eyes that had seen too much, nodded curtly in his direction before returning her attention to the myriad of screens and displays that dominated the room's walls. Dr. Patel, the station's resident biologist, floated nearby, her delicate fingers dancing across a holographic interface as she analyzed the latest batch of samples from the Martian terraforming project. And Dr. Chen, the quiet engineer, was hunched over a malfunctioning life support unit, muttering curses in a language Aiden had long since given up trying to decipher. "Morning, doc," Liu's voice crackled through the comms, her tone devoid of any semblance of warmth. "Anything interesting on the horizon?" Aiden forced a smile that felt as fake as the gravity that held him in place. "Just another day in paradise, commander." Liu grunted in response, her focus never wavering from the screens. A tense silence settled over the module, broken only by the soft hum of machinery and the occasional beep of an alert. It was then that the world ended. At first, it was nothing more than a flicker on the farthest edge of the main viewport, a fleeting aberration easily dismissed as a cosmic anomaly or a trick of the eye. But as the seconds ticked by, the flicker grew, blossoming into a brilliant, searing bloom of light that seemed to consume the very fabric of space itself. "What the hell is that?" Chen's voice crackled over the comms, laced with a panic that sent a cold shiver down Aiden's spine. Liu's fingers flew across her console, her eyes wide with disbelief. "Magnify sector Alpha-Tango-Niner," she barked, her voice betraying the barest hint of fear. 
The viewport obeyed, the Earth's serene visage giving way to a scene straight from the depths of hell. A mushroom cloud, its cap a sickening shade of radioactive green, rose like an angry god from the surface, casting a pall of death and destruction across the once-pristine landscape. "Oh my God," Patel whispered, her hands trembling as she crossed herself. "What have we done?" Aiden's heart felt like a lead weight in his chest as the implications of what they were witnessing sank in. The unthinkable had happened. The fragile peace that had held humanity together for generations had finally shattered, and now the Earth burned in the fires of its own hubris. "Commander, we need to—" he began, but Liu silenced him with a raised hand. "I know, Hale," she said, her voice brittle as glass. "I know." The comms crackled to life, the frantic voice of Mission Control cutting through the oppressive silence like a knife. "Elysium, this is Houston. Do you copy? Repeat, do you copy?" Liu's fingers trembled as she keyed the mic. "Houston, this is Elysium. We... we see it." Static filled the airwaves for what felt like an eternity before a voice, strained and broken, replied. "Understood, Elysium. Initiate Protocol Omega. Godspeed." The line went dead, leaving only the haunting silence of the void and the distant rumble of nuclear Armageddon. Aiden's mind raced as he tried to process the enormity of what had just transpired. Protocol Omega. The words echoed in his mind like a funeral dirge, their meaning all too clear. They were alone. Cut off from Earth, their lifeline severed in a heartbeat. "What do we do now?" Patel's voice was barely audible, her usually confident demeanor shattered like so much space debris. Liu's gaze remained fixed on the viewport, her eyes distant and haunted. "We survive," she said, her voice devoid of emotion. "That's all we can do now. We survive." As the station continued its silent orbit around the dying planet, Aiden couldn't help but wonder if survival was truly worth the price they were about to pay. Trapped in the cold embrace of space, with only each other and the ghosts of their pasts for company, the crew of Elysium faced an uncertain future filled with darkness and despair. But as the last vestiges of humanity's homeworld faded into the cold, uncaring void, one thought burned bright in Aiden's soul, a defiant beacon in the encroaching darkness. They would not go quietly into the night. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT The air was thick with the acrid scent of molten steel and burnt ozone as Kaelen, a wiry figure clad in a tattered exosuit, welded the final seam of the biodome's outer shell. Sweat beaded on his brow, trickling down the inside of his helmet, stinging his eyes. He cursed under his breath, the words lost to the cacophony of machinery that surrounded him. The planet's harsh, alien sun cast an eerie, pulsating glow through the translucent dome, casting long, distorted shadows across the metallic floor. 
*This is it,* he thought, *the last damn piece.* For months, he and his crew had labored tirelessly in the unforgiving embrace of this desolate world, terraforming the barren wasteland into something resembling a semblance of home. A new Eden for the desperate masses of Earth, choking on their own excesses. A final refuge from the ravages of a dying planet. A sudden tremor shook the platform beneath his magnetic boots, sending a jolt of adrenaline coursing through his veins. Kaelen's grip tightened on the welding torch, his heart pounding like a trapped animal in his chest. He glanced up, eyes narrowing behind the scratched visor, searching for the source of the disturbance. "What the hell was that?" he barked into the comms, his voice crackling with static. Silence answered him, save for the relentless hum of the construction drones. *Probably just another damn quake,* he tried to reassure himself. *This rock is more unstable than a politician's promise.* But deep down, he knew it was something more. A primal instinct, honed by years of working on the fringes of civilization, screamed at him to run. To abandon this cursed project and flee back to the relative safety of the orbiting station. Ignoring the warning, Kaelen pressed on, determined to finish his task. He was so close, so tantalizingly close to finally earning his ticket back to the green fields and blue skies of Earth. To see his wife's smile and hold his newborn child in his arms. The comms crackled to life, shattering the oppressive silence. "Kaelen, we've got a breach in sector four!" The voice was panicked, barely recognizable as his foreman, Tamsin. "Get your ass to the evac pod, now!" His heart sank like a stone in his stomach. A breach meant one thing: the atmosphere was venting. In minutes, the entire dome would be exposed to the vacuum of space, and with it, any hope of life on this forsaken rock. "I'm almost done here," he protested, his voice strained. "I can seal it off, buy us some time." "Dammit, Kaelen! That's an order!" Tamsin's voice was a whipcrack of authority. "We can't risk losing you too!" The words stung like a physical blow. *Losing me too?* The unspoken implication hung heavy in the air, like the oppressive weight of the dying planet's atmosphere. With a resigned sigh, Kaelen relinquished his grip on the torch, watching it drift away in the low gravity. He turned, his movements slow and deliberate, towards the distant glow of the emergency exit. As he navigated the labyrinthine corridors of the half-finished biodome, the gravity of their situation began to sink in. The years of backbreaking labor, the sacrifices made, the friends lost to accidents and the unforgiving void, all for this. All for a dream that now seemed as fragile as the thinning atmosphere around them. A sudden, deafening roar drowned out his thoughts, accompanied by a blinding flash of light. The world around him exploded into a maelstrom of twisted metal and screaming alarms. He was thrown violently against the nearest bulkhead, stars dancing before his eyes as the breath was knocked from his lungs. Through the chaos, he caught a fleeting glimpse of the breach, a gaping maw in the dome's hull, spewing precious air into the cold void beyond. The sight was both beautiful and terrifying, like staring into the very maw of oblivion. *This is it,* he thought, resignation settling over him like a shroud. 
*This is how it ends.* As the darkness closed in, Kaelen's last thoughts were of his wife, her laughter like a distant melody, and the child he would never hold. He wondered if they would remember him, a faceless martyr to humanity's insatiable hunger for survival. Or would they simply erase him, like so many others, from the annals of history? A mere footnote in the tragic tale of mankind's final, desperate gasp for existence? As the cold embrace of the void claimed him, Kaelen found no solace in the stars that now surrounded him. Only the bitter taste of regret and the crushing weight of a dream shattered, like the fragile biodome that had been their last hope. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT In the abyssal silence of the cosmos, where the whispers of distant stars were but a muted lullaby, Dr. Alistair Kovacs sat hunched over his console, a solitary sentinel in the watchtower of the void. The dimly lit confines of the listening station, a claustrophobic cocoon of cold steel and humming machinery, pressed in around him like the suffocating embrace of a nebula. His eyes, bloodshot and weary, flickered across the myriad of screens, each one a window into the unfathomable expanse that stretched out in all directions, a canvas of infinite blackness speckled with the faintest of celestial brushstrokes. The monotony of his vigil was a constant companion, a relentless tormentor that gnawed at the edges of his sanity like the insidious tendrils of some unseen cosmic predator. Days blurred into weeks, weeks into months, and still, the silence persisted, a deafening cacophony that threatened to consume him whole. But today, as the station drifted along its predetermined path, a fragile speck of human ingenuity adrift in the vast and uncaring sea of the universe, the silence was broken. A faint, almost imperceptible, blip on the farthest reaches of the spectrum analyzer caught his attention. Alistair's heart skipped a beat, a sudden jolt of adrenaline coursing through his veins like a bolt of cosmic lightning. He leaned in closer, his breath fogging the cold glass of the monitor as he squinted at the anomaly, his mind racing with a thousand possibilities, each more terrifying than the last. Could it be interference? A rogue asteroid, its dying song resonating through the vacuum of space? Or perhaps, the long-awaited signal from the Proxima Centauri mission, a beacon of hope from the farthest reaches of mankind's fragile grasp on the stars? He knew the odds were against him, knew that the vast majority of these blips and static-laced whispers were nothing more than the cosmic equivalent of white noise. But still, he clung to the faintest sliver of hope, the irrational belief that this time, this one fleeting moment in the endless march of eternity, would be different. His trembling fingers danced across the console, coaxing the dormant machinery to life as he initiated the complex sequence of algorithms designed to filter out the cosmic static and isolate the signal. The air in the cramped compartment seemed to thicken, the very atoms around him holding their breath in anticipation. 
Minutes stretched into hours, or so it seemed to Alistair, as he waited with bated breath for the results. The silence in the station was now a palpable thing, a suffocating weight that pressed down upon him like the crushing embrace of a dying star. And then, as suddenly as it had appeared, the signal clarified. At first, it was nothing more than a garbled mess of static and indecipherable noise, a chaotic symphony of cosmic static that threatened to drown out any semblance of coherence. But as the algorithms worked their digital alchemy, the noise began to recede, giving way to something else entirely. A voice. A human voice, crackling and distorted, but unmistakably so, echoed through the speakers, its haunting cadence reverberating through the very bones of the station. "...terminus...incoming...evacuate...immediately..." A chill ran down Alistair's spine, a cold, icy tendril that seemed to burrow its way into the very core of his being. He knew that voice. He had heard it countless times during the endless hours of training, the mandatory drills, and the sleepless nights spent poring over mission briefings. It was the voice of Dr. Amelia Hartford, the lead astrophysicist aboard the research vessel, the Icarus. And she was screaming for help. His heart pounding in his chest like the relentless beat of a war drum, Alistair frantically worked the console, desperate to extract any semblance of meaning from the garbled transmission. The words came in fits and starts, broken and distorted, as if the very fabric of space-time itself was fighting to keep them hidden. "...unknown...object...approaching...terminus...impossible...velocity...evacuate...immediately..." A cold sweat beaded on his brow as the implications of her words sank in. An unknown object, approaching at impossible speeds, heading straight for their designated research sector. It was a scenario they had all been trained for, a nightmare scenario that had been relegated to the dusty annals of worst-case-scenario protocols and late-night horror stories. But it was happening. It was real. His mind raced as he tried to process the information, his thoughts a jumbled mess of panic and disbelief. What could be out there, hurtling through the void at such speeds that it defied the very laws of physics they had spent their lives trying to understand? And more importantly, what did it mean for the crew of the Icarus, and for the fragile web of humanity that clung to the distant, dying embers of the Milky Way like a drowning man clutching at the last vestiges of a sinking ship? As the message continued to unfold, the pieces of the puzzle began to coalesce in his mind, forming a picture so horrifying, so utterly incomprehensible, that he found himself questioning the very nature of reality itself. "...not...natural...origin...unknown...hostile...intent...evacuate...immediately..." The words echoed in his mind like the tolling of a funeral bell, each syllable a nail in the coffin of his sanity. An alien object, of unknown origin and hostile intent, was bearing down on their position at speeds that defied all reason and understanding. And they were alone. Cut off from the relative safety of the Terran Coalition by light-years of empty space, the Icarus and her crew were the vanguard of humanity's desperate push into the unknown, the first line of defense against the unimaginable horrors that lurked in the shadows of the cosmos. And now, it seemed, those horrors had found them. 
A wave of nausea washed over him as the full weight of his responsibility crashed down upon him like the crushing gravity of a collapsing star. He was the eyes and ears of the Coalition in this desolate corner of the galaxy, the thin line that separated the fragile light of civilization from the encroaching darkness of the void. And he had failed. With trembling hands, he initiated the emergency protocols, sending out a distress signal on all frequencies, a desperate plea for help that would take years, decades even, to reach the nearest outpost of humanity. But it was all he could do. As the station's engines roared to life, the harsh glare of the thrusters casting eerie shadows across the cramped confines of the control room, Alistair couldn't help but feel a sense of overwhelming despair settle over him like a suffocating shroud. They were alone. And the darkness was coming. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or half must go: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today. ### ASSISTANT In the bowels of the earth, where the last vestiges of mankind clung to a semblance of existence, the technician sat hunched over his console, the flickering screen casting an eerie pallor upon his gaunt visage. The air was stale, heavy with the weight of recycled breaths and the oppressive silence that had become the bunker's mournful hymn. The digital readouts danced before his weary eyes, their once-familiar patterns now twisted into a macabre waltz of decay and despair. He had been tasked with monitoring the world above, a world that had long since been reduced to a desolate wasteland, a charred canvas upon which the hand of man had scrawled its final, violent masterpiece. The Geiger counter ticked away in the background, its rhythmic pulse a constant reminder of the invisible specter that haunted their every waking moment. The technician's thoughts drifted back to the day the sirens had first wailed, their shrill cacophony tearing through the fabric of their mundane lives like the harbingers of doom they had proven to be. He remembered the panicked scramble, the desperate faces pressed against the reinforced glass as they were sealed away from the inferno that consumed the surface. He remembered the promises of salvation, the assurances that they would be safe, that they would be the ones to rebuild when the dust settled and the radiation subsided. But the dust had never settled, and the radiation had only grown more insidious, creeping into every crevice of their subterranean sanctuary like a malevolent fog. The intercom crackled to life, shattering the oppressive silence. The voice that emanated from the speaker was cold, devoid of any semblance of hope or compassion. It was the voice of their leader, the man who had led them to this underground tomb with promises of salvation now rendered as hollow as the husks of the world above. 
"Attention, all personnel. I regret to inform you that our latest readings have confirmed our worst fears. The surface has been irradiated beyond our most pessimistic projections. It appears that salted cobalt bombs were used in the final stages of the conflict. The soil is now saturated with lethal levels of radioactivity. Our estimates indicate that the surface will be uninhabitable for centuries, if not millennia." A collective gasp echoed through the bunker's sterile corridors, the air thickening with the acrid tang of despair. The technician's heart sank like a stone in the pit of his stomach, the weight of the news settling upon him like a leaden shroud. He knew what this meant. They had all known, deep down, that their chances were slim, but to have it confirmed in such unequivocal terms was a blow that threatened to snuff out the last flickering embers of hope that still burned within them. He glanced around the cramped control room, his gaze settling on the faces of his fellow survivors. The doctor, her once-steady hands now trembling as she clutched a worn stethoscope; the engineer, his eyes vacant and distant as he stared at the schematics of a world that no longer existed; the botanist, her green thumbs now useless appendages in a world devoid of life. Each face was a testament to the slow, insidious decay that had taken root in their souls, gnawing away at their resolve like the relentless radiation that now surrounded them. The intercom crackled again, the leader's voice now tinged with a note of resignation. "We have convened an emergency council meeting. All senior personnel are to report to the command center immediately." The technician rose from his chair, his joints creaking in protest. He knew what awaited them in that cold, sterile room. They had all known, in the darkest recesses of their minds, that this day would come. They had prepared for every contingency, every possible outcome, save for this one. As he made his way through the dimly lit corridors, the whispers of his fellow survivors followed him like the spectral echoes of the dead. Fear, anger, and desperation mingled in the stale air, a cacophony of emotions that threatened to suffocate him. The command center was a hive of frantic activity when he arrived. The council members were huddled around a large table, their faces etched with lines of worry and fatigue. Maps and schematics were strewn haphazardly across the surface, their once-precise lines now blurred and meaningless in the face of the new reality that confronted them. The leader looked up as he entered, his eyes meeting the technician's with a heavy, resigned gaze. "We have run the numbers," he said, his voice devoid of emotion. "Our supplies will last, at best, another five years. With our current population, that means..." He trailed off, the implication hanging in the air like a noxious cloud. The technician didn't need him to finish the sentence. They all knew what it meant. They had been living on borrowed time since the day they had descended into this subterranean tomb, and now the debt had come due. The room fell into a tense silence, broken only by the distant hum of the life support systems that kept the suffocating embrace of the void at bay. The technician's mind raced, searching for a solution, any solution, that would spare them from the grim fate that now loomed before them like a specter. But there was none to be found. 
The council members exchanged glances, each one a silent plea for someone else to offer a miracle that they all knew was beyond their reach. Finally, the doctor spoke, her voice trembling with barely contained emotion. "We...we can't just...just let half of us die." The engineer shook his head, his eyes downcast. "We don't have a choice. The math doesn't lie." The botanist let out a strangled sob, her hands clenching into fists at her sides. "There has to be another way. We can't just...we can't just give up." The technician closed his eyes, the weight of their despair pressing down on him like a mountain. He knew what had to be done, but the thought of voicing it aloud was like swallowing broken glass. "We...we have to draw lots," he croaked, the words sticking in his throat like bile. The room erupted into a cacophony of protests and denials, but the leader silenced them with a raised hand. "It's the only way," he said, his voice flat and emotionless. "It's the only fair way." And so it was decided. In the depths of their underground tomb, as the world above them slowly succumbed to the slow, inexorable embrace of nuclear winter, the last remnants of humanity would play a macabre game of chance, each one praying to whatever gods still listened that their number would not be called. As the technician returned to his station, the weight of their impending doom settling upon him like a leaden shroud, he couldn't help but wonder if this was truly the end. If this was the legacy that mankind would leave behind, a final, desperate gasp in the face of an uncaring universe. Or perhaps, in the end, it was simply the price they had to pay for daring to reach for the stars. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid-1970s. There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and the air in California was thick with the scent of gold and desperation. The Gold Rush had swept through the land like a wildfire, consuming everything in its path and leaving behind a trail of broken dreams and empty pockets. But for a select few, it was a time of unimaginable wealth and power. In the heart of this frenzied landscape, nestled between the towering sequoias and the rushing waters of the Sacramento River, lay the small mining town of El Dorado. Founded by a group of Spanish prospectors who believed they had stumbled upon the legendary city of gold, it had quickly grown into a bustling hub of activity, teeming with fortune seekers from all corners of the world. Among these hopeful souls was a man named Alistair Blackwood. Tall and lean, with piercing blue eyes that seemed to bore into the very soul of anyone who dared to meet his gaze, he was a figure of mystery and intrigue. Some said he was a former British naval officer turned prospector, while others whispered that he was a gentleman thief who had fled the clutches of justice in his homeland. Whatever his true origins, one thing was certain: Alistair Blackwood had a knack for finding gold where others saw only dirt and rock. It was on a sultry summer evening, as the sun dipped below the horizon and the stars began to twinkle like scattered diamonds in the velvet sky, that Alistair's life took an unexpected turn. 
He had spent the day panning the river's murky depths, his calloused hands sifting through countless pebbles and nuggets of fool's gold, when suddenly, his pan struck something solid and unyielding. With trembling fingers, he carefully extracted the object from the muddy water, holding it up to the flickering light of his lantern. It was a small, intricately carved amulet, fashioned from a metal that shimmered like liquid gold in the firelight. Strange symbols etched into its surface seemed to dance and writhe, as if imbued with a life of their own. A low whistle escaped his lips as he examined the trinket more closely. He had seen many such baubles in his travels, but none that had ever stirred within him the strange mixture of awe and unease that now coursed through his veins. This was no ordinary piece of miner's luck; this was something far older, far more valuable... and infinitely more dangerous. Fast forward to London, 1975. The city was a far cry from the wild frontier of Alistair Blackwood's time. Gone were the dusty trails and ramshackle saloons, replaced by sleek skyscrapers and bustling thoroughfares. Yet beneath the veneer of modernity, the echoes of the past still lingered, whispering their secrets to those who knew where to listen. One such individual was Dr. Eleanor "Ella" Winston, a young and ambitious archaeologist with a penchant for the arcane and the esoteric. Fresh from her latest expedition to the Egyptian desert, she had returned to the hallowed halls of the British Museum, eager to immerse herself in the dusty tomes and forgotten relics that held the key to mankind's forgotten history. It was during one such late-night foray into the museum's restricted archives that she stumbled upon a leather-bound journal, its pages yellowed with age and crumbling at the edges. The faded script on the cover bore a single name: Blackwood. Intrigued, Ella settled herself into a worn leather armchair and began to read. Hours passed like minutes as she became lost in the tale of Alistair Blackwood and his quest for the fabled Golden City. The more she read, the more she felt an inexplicable connection to the enigmatic prospector, as if the threads of their lives were somehow entwined across the vast tapestry of time. And then, as if drawn by an unseen hand, her fingers brushed against a small, hard object hidden within the pages. Her heart skipped a beat as she carefully extracted it, her breath catching in her throat as she recognized the familiar shape and weight. It was the amulet. The same one that had once belonged to Alistair Blackwood, now resting in her trembling hands, a tangible link between two worlds separated by over a century. As she traced the ancient symbols etched into its surface, Ella couldn't shake the feeling that she had stumbled upon something far greater than a mere historical curiosity. This was the beginning of a mystery that would span continents and centuries, drawing her into a web of deception, danger, and dark secrets that would forever change the course of her life. And so, with the golden glow of the amulet casting eerie shadows on the walls around her, Ella Winston embarked on a journey that would lead her from the bustling streets of 1970s London to the dusty trails of the California Gold Rush, and beyond. For she was about to discover that some treasures were never meant to be found... and some secrets were better left buried in the sands of time. 
</details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them. ### ASSISTANT Snow crunched beneath my boots as I trudged through the frozen hell that was once the Ardennes Forest. The weight of my commander, Oberleutnant Schmidt, hung heavily over my left shoulder, his limp body a constant reminder of the futility of our situation. The cold bit at my exposed skin, numbing the pain in my right arm, which was slick with blood from the shrapnel wound I'd sustained during the initial American assault. The trees around us were skeletal silhouettes against the moonlit sky, their branches stripped bare by the relentless artillery barrage that had preceded the enemy's ground advance. The air reeked of cordite and burning pine, a bitter perfume that clung to my nostrils and choked my lungs with every ragged breath. "Komm weiter, Leutnant," Schmidt groaned, his voice barely audible over the distant rattle of machine gun fire. "We must reach the defensive line." I gritted my teeth and pushed forward, my vision blurring from exhaustion and the freezing tears that threatened to freeze on my cheeks. "Ja, Herr Oberleutnant," I managed to croak, my own voice foreign and weak. A sudden flash of light illuminated the night, followed by an ear-splitting roar. I instinctively dropped to the ground, dragging Schmidt with me as the earth shook beneath us. Shrapnel whizzed overhead, tearing through the air like a swarm of angry hornets. When the deafening cacophony subsided, I risked a glance upwards. The sky was alive with tracer fire, painting a macabre dance of death across the heavens. As the ringing in my ears began to fade, I could hear the unmistakable sound of approaching American tanks, their engines growling like hungry beasts closing in on their prey. My heart sank like a stone in my chest. We were cut off, surrounded by the enemy with no hope of reinforcement. A wave of despair threatened to consume me, but I shoved it aside. I was a soldier of the Reich, and I would not falter in my duty, not while my commander still breathed. With a Herculean effort, I hauled Schmidt back to his feet, ignoring the fresh wave of pain that shot through my arm. He leaned heavily against me, his breathing labored and uneven. Blood soaked his uniform, staining the pristine white of the makeshift bandage around his abdomen a sickening crimson. "Wir schaffen das," I lied, my voice trembling. "We'll make it." He looked at me then, his eyes glassy with pain and resignation. "Nein, Leutnant. Wir sind verloren." No, we're not lost, I wanted to argue, but the truth lay heavy in his words. We were lost, and we both knew it. As we stumbled through the desolate landscape, my mind drifted back to the moments before our world had been torn apart. --- The bunker had been our sanctuary, a concrete fortress buried deep within the heart of the forest. The constant drone of artillery fire had become a distant, almost comforting, hum as we huddled together, waiting for the inevitable American advance. 
I remember the tension in the air, thick and suffocating, as we listened to the static-filled radio reports. The Americans were closing in, their numbers seemingly endless. Our own lines were crumbling under the relentless onslaught, and soon, it would be our turn to face the enemy. Then, without warning, the ground shook violently, and the world exploded around us. The deafening roar of incoming shells drowned out the screams of my comrades as the bunker's walls crumbled like paper. I remember the searing pain as shrapnel tore through my arm, and the disorienting sensation of being thrown through the air like a ragdoll. When the dust finally settled, I found myself lying amidst the twisted wreckage, my ears ringing and vision blurred. Through the haze, I saw Schmidt, pinned beneath a fallen beam, his face contorted in agony. I crawled towards him, ignoring the screams of the dying and the acrid smell of burning flesh that permeated the air. "Sir!" I shouted, my voice hoarse from the smoke. "Sir, can you hear me?" He groaned in response, his eyes flickering open to meet mine. "Leutnant... retreat... order... retreat..." But it was too late for orders. The bunker was gone, and with it, any semblance of command or control. We were alone, two wounded soldiers in a sea of carnage. --- The present crashed back into focus as a burst of machine gun fire erupted nearby, the bullets whizzing past us like angry hornets. I ducked instinctively, pulling Schmidt down with me behind the scant cover of a fallen tree. "Wir sind verloren," he repeated, his voice barely a whisper now. I didn't have the heart to argue. Instead, I just nodded, my throat too tight to form words. We remained there, huddled together in the frozen darkness, as the sounds of battle drew ever closer. I could hear the Americans shouting in their guttural language, their voices filled with a savage exultation that chilled me to my very core. Suddenly, a flare illuminated the night, casting an eerie, otherworldly glow over the devastated landscape. For a brief moment, I saw them, their silhouettes advancing through the trees like ghostly specters. "Scheiße," I cursed, my hand instinctively reaching for my Luger. But it was useless. We were outnumbered, outgunned, and out of ammunition. As the first of the Americans emerged from the shadows, his face twisted into a savage grin, I knew our end had come. I closed my eyes, whispering a silent prayer for my family back home, and prepared to meet my maker. But the bullet that found me didn't come from the enemy. It came from behind, a betrayal as cold and merciless as the winter night itself. I felt a searing pain in my back, and then nothingness. When I opened my eyes again, I was floating above the scene, watching as the Americans swarmed over our lifeless bodies like vultures picking at carrion. Schmidt's voice echoed in my mind, a haunting refrain that would follow me into the abyss. "Wir sind verloren." We were lost. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be very dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. 
### ASSISTANT Anton Chigurh sat in the dimly lit corner of the seedy Mexican cantina, his eyes scanning the room with the cold, detached gaze of a predator assessing its prey. The air was thick with the pungent aroma of stale cigarette smoke and cheap tequila, a fitting backdrop for the impending transaction. He tapped his index finger rhythmically against the worn wooden table, the metallic clink of his handcuffs the only sound disturbing the tense silence. His thoughts drifted inward, a swirling vortex of twisted philosophies and warped moralities. Life, he mused, was nothing more than a series of coin flips, random events governed by an indifferent universe. Fate, destiny, free will - mere illusions conjured by the feeble minds of men to give meaning to their insignificant existence. The only true constant was chance, the capricious mistress who held the reins of their pitiful lives. He reached into his worn leather jacket, his fingers brushing against the cool, smooth surface of the silver coin he always carried. He flipped it idly in his palm, heads or tails, life or death, it mattered not to him. The outcome was predetermined, as inevitable as the rising sun or the setting moon. It was simply a matter of revealing the truth that lay hidden beneath the veil of uncertainty. A grizzled man in a tattered sombrero shuffled into the bar, his eyes darting nervously from patron to patron. Anton's gaze locked onto him like a hawk zeroing in on its quarry. This was the messenger, the pawn in the grand cosmic game of chance that had led him here. The man slid a folded piece of paper across the table, muttered a barely audible "Buena suerte," and disappeared into the shadows as quickly as he had emerged. Anton unfolded the note, his eyes scanning the hastily scrawled instructions with clinical detachment. A name, a location, a time - the bare essentials of his next assignment. He memorized the details with the ease of a seasoned killer, then casually crumpled the paper and let it fall to the floor, consumed by the hungry flames of a nearby candle. As he rose from his seat, the bartender, a portly man with a sweat-stained wife-beater, eyed him warily. He knew the type - the quiet ones were always the most dangerous. But he said nothing, wisely choosing to mind his own business in the presence of such palpable menace. Outside, the oppressive heat of the Mexican desert night enveloped Anton like a suffocating shroud. He welcomed it, the suffocating embrace a stark reminder of the fleeting nature of life. He slid into the driver's seat of his nondescript sedan, the engine rumbling to life with a low growl. The hunt had begun. As he navigated the labyrinthine streets of the border town, his thoughts returned to the coin in his pocket. He could feel its weight, a constant reminder of the power he wielded. With a flick of his wrist, he could decide the fate of another human being, snuff out their feeble existence as easily as extinguishing a candle's flame. The thought brought a twisted smile to his lips, a chilling rictus that would send shivers down the spine of even the most hardened of men. The address was a nondescript motel on the outskirts of town, the kind of place where desperate souls went to disappear. Anton parked the car a block away, preferring the anonymity of the shadows. He retrieved the grenade from the glove compartment, its cold, metallic surface a comforting weight in his hand. It was a crude instrument, but effective in its simplicity. 
Chance, after all, did not discriminate between the swift blade and the indiscriminate blast. He approached the room with the stealth of a panther stalking its prey, the soles of his boots barely whispering against the cracked asphalt. The door was unlocked, a careless oversight that would prove to be the occupant's undoing. He slipped inside, the darkness enveloping him like a familiar cloak. The man lay sprawled on the stained mattress, a half-empty bottle of tequila clutched in one hand, a revolver in the other. He stirred at the sound of the intrusion, his bleary eyes struggling to focus in the gloom. It was too late. Anton's hand moved with the fluid grace of a deadly dance, the grenade arcing through the air before it found its mark. The man's eyes widened in terror, but there was no time for pleas or prayers. The world exploded into a cacophony of light and sound, the room disintegrating around them in a maelstrom of shrapnel and flame. As the smoke cleared, Anton stepped over the twisted wreckage that had once been a man, his expression unchanged. He retrieved the revolver from the debris, spinning the cylinder with a practiced flick of his wrist. Empty. The fool had been bluffing, counting on luck to save him from the inevitable. He pocketed the gun, a macabre souvenir of another life extinguished by the whims of chance. As he slipped back into the night, the coin in his pocket seemed to hum with a sinister energy, eager for the next dance with fate. In the distance, the mournful howl of a coyote echoed through the desert, a haunting serenade to the randomness of existence. Anton Chigurh, the embodiment of chaos and destruction, disappeared into the shadows, a spectral figure guided only by the capricious whims of chance. </details> Big thanks to @chargoddard for creating [Mergekit](https://github.com/arcee-ai/mergekit)!
AlignmentResearch/robust_llm_pythia-pm-160m-niki-ada-v4-s-0
AlignmentResearch
2024-05-28T21:11:48Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T21:11:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
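Since the "How to Get Started with the Model" section of the card above is still a placeholder, here is a minimal usage sketch. It assumes only what the repo tags state: a `gpt_neox` checkpoint with a text-classification head loadable through 🤗 Transformers. The example input is made up, and the returned label names (`LABEL_0`, `LABEL_1`, ...) are auto-generated, so their meaning is an assumption until the card documents the task.

```python
# Minimal sketch, assuming the standard 🤗 Transformers text-classification API;
# the classification task and label semantics are not documented in this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-pm-160m-niki-ada-v4-s-0",
)

# Hypothetical input: replace with text from whatever task this checkpoint was tuned on.
print(classifier("Example input text to classify."))
```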
AlignmentResearch/robust_llm_pythia-pm-160m-niki-ada-v4-s-2
AlignmentResearch
2024-05-28T21:11:18Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T21:11:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-pm-70m-niki-ada-v4-s-2
AlignmentResearch
2024-05-28T21:07:47Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T21:07:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-pm-70m-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T21:05:59Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T21:05:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RishieRish/chat_plmbr_2.2k_lora_model_8192_fp16
RishieRish
2024-05-28T21:02:31Z
5
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T20:54:03Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** RishieRish - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
LiteLLMs/aya-23-35B-GGUF
LiteLLMs
2024-05-28T20:58:56Z
61
1
transformers
[ "transformers", "gguf", "GGUF", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "arxiv:2405.15032", "license:cc-by-nc-4.0", "region:us", "conversational" ]
null
2024-05-28T20:01:16Z
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 library_name: transformers tags: - GGUF inference: false quantized_by: andrijdavid --- # aya-23-35B-GGUF - Original model: [aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B) <!-- description start --> ## Description This repo contains GGUF format model files for [aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/aya-23-35B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/aya-23-35B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/aya-23-35B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/aya-23-35B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
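For instance, a lower-memory variant of the command above might look like this (a hedged sketch: the reduced layer count and smaller context window are illustrative values to tune for your hardware):

```shell
# Same model and sampling settings, but offload only 20 layers and halve the context
./main -ngl 20 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```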
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, set the CMAKE_ARGS variable in PowerShell, following this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=8192, # The max sequence length to use - this model supports an 8192 context, and longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: aya-23-35B # Model Card for Aya-23-35B **Try Aya 23** You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ## Model Summary Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages. This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B). We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: aya-23-35B - Model Size: 35 billion parameters ### Usage Please install transformers from the source repository that includes the necessary changes for this model ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/aya-23-35B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ### Example Notebook [This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions. 
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese **Context length**: 8192 ### Evaluation <img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> <img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation. ### Model Card Contact For errors or additional questions about details in this model card, contact [email protected]. ### Terms of Use We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try the model today You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Citation info ```bibtex @misc{aryabumi2024aya, title={Aya 23: Open Weight Releases to Further Multilingual Progress}, author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker}, year={2024}, eprint={2405.15032}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- original-model-card end -->
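As a follow-up to the LangChain pointers in this card, here is a minimal sketch of loading one of these GGUF files through LangChain's `LlamaCpp` wrapper (assumptions: the `langchain-community` package is installed, the first Q4_0 shard has been downloaded as shown earlier, and the prompt is illustrative):

```python
from langchain_community.llms import LlamaCpp

# Point at the first shard of the split GGUF; llama.cpp loads the
# remaining shards automatically when they sit in the same directory.
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_gpu_layers=35,   # layers to offload to GPU; use 0 for CPU-only
    n_ctx=8192,        # matches the model's stated context length
    temperature=0.7,
)

print(llm.invoke("Write a short greeting in French."))
```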
AlignmentResearch/robust_llm_pythia-pm-14m-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T20:58:45Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:58:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-imdb-1b-niki-ada-v4-s-0
AlignmentResearch
2024-05-28T20:54:24Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:52:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eliem/model-qlora-4bit
eliem
2024-05-28T20:51:19Z
87
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-28T20:46:56Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** eliem - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
CherishC/Personal_Scores
CherishC
2024-05-28T20:49:23Z
109
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T06:37:14Z
--- license: apache-2.0 language: - en ---
siddharthsingh5010/distilbert-base-uncased-finetuned-emotions
siddharthsingh5010
2024-05-28T20:49:10Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T19:55:34Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotions results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.8945 - name: F1 type: f1 value: 0.8905629739592735 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotions This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3513 - Accuracy: 0.8945 - F1: 0.8906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 125 | 0.5315 | 0.8495 | 0.8357 | | 0.7527 | 2.0 | 250 | 0.3513 | 0.8945 | 0.8906 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
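The card leaves usage unspecified; a minimal inference sketch with the 🤗 `pipeline` API follows (the model id is this repo, the example sentence is illustrative, and the exact labels depend on the emotion dataset's classes):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint for emotion classification
classifier = pipeline(
    "text-classification",
    model="siddharthsingh5010/distilbert-base-uncased-finetuned-emotions",
)

print(classifier("I can't wait to see you this weekend!"))
# e.g. [{'label': 'joy', 'score': 0.97}]  (illustrative output)
```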
oyonay12/aqc_1
oyonay12
2024-05-28T20:47:20Z
113
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T21:31:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
andy006/lora_instruct_model_merged
andy006
2024-05-28T20:46:30Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T00:07:50Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** andy006 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
the-cramer-project/Whisper_STT
the-cramer-project
2024-05-28T20:44:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-28T20:44:26Z
--- license: apache-2.0 ---
shane062/whisper-tiny-finetuned
shane062
2024-05-28T20:36:43Z
123
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-27T12:40:36Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: whisper-tiny-finetuned results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: test args: default metrics: - name: Wer type: wer value: 86.48648648648648 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3909 - Wer Ortho: 83.7838 - Wer: 86.4865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:--------:|:----:|:---------------:|:---------:|:-------:| | 0.0001 | 166.6667 | 500 | 1.3909 | 83.7838 | 86.4865 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cpu - Datasets 2.19.1 - Tokenizers 0.19.1
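Usage is not specified on the card; a minimal transcription sketch with the 🤗 `pipeline` API follows (the model id is this repo and the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Whisper-tiny checkpoint for speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="shane062/whisper-tiny-finetuned",
)

result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```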
AlignmentResearch/robust_llm_pythia-imdb-160m-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T20:32:39Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:32:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nrbhole/invoices-donut-model-v4
nrbhole
2024-05-28T20:28:25Z
15
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-28T19:22:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
olesya2096/donut-base-sroie
olesya2096
2024-05-28T20:28:21Z
51
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:Akajackson/donut_rus", "base_model:finetune:Akajackson/donut_rus", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-28T20:20:57Z
--- base_model: Akajackson/donut_rus tags: - generated_from_trainer model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [Akajackson/donut_rus](https://huggingface.co/Akajackson/donut_rus) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3783 | 1.0 | 85 | 0.6386 | | 0.5799 | 2.0 | 170 | 0.3075 | | 0.2669 | 3.0 | 255 | 0.2106 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
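The card above gives training details but no inference snippet; here is a minimal sketch using the standard `transformers` vision-encoder-decoder API, assuming the repository ships both a processor and model weights (the task prompt token and the input filename are placeholders, not confirmed by the card):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Load the processor and fine-tuned weights from the Hub
processor = DonutProcessor.from_pretrained("olesya2096/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("olesya2096/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s>" is an assumed task prompt; check the fine-tune's actual start token
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```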
AlignmentResearch/robust_llm_pythia-imdb-160m-niki-ada-v4-s-2
AlignmentResearch
2024-05-28T20:27:39Z
107
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:27:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
amasi/barthez-orange-ft
amasi
2024-05-28T20:27:12Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "base_model:moussaKam/barthez-orangesum-abstract", "base_model:finetune:moussaKam/barthez-orangesum-abstract", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-28T09:08:23Z
--- license: apache-2.0 base_model: moussaKam/barthez-orangesum-abstract tags: - generated_from_trainer metrics: - rouge model-index: - name: barthez-orange-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # barthez-orange-ft This model is a fine-tuned version of [moussaKam/barthez-orangesum-abstract](https://huggingface.co/moussaKam/barthez-orangesum-abstract) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1689 - Rouge1: 0.6719 - Rouge2: 0.6536 - Rougel: 0.6719 - Rougelsum: 0.6722 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 31 | 4.6662 | 0.6719 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 1.99 | 62 | 0.6939 | 0.6718 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 2.99 | 93 | 0.2939 | 0.6718 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 3.98 | 124 | 0.2089 | 0.6719 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 4.98 | 155 | 0.1880 | 0.6719 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 5.98 | 186 | 0.1795 | 0.6719 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 6.97 | 217 | 0.1752 | 0.6719 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 8.0 | 249 | 0.1732 | 0.6719 | 0.6535 | 0.6718 | 0.6721 | 20.0 | | No log | 9.0 | 280 | 0.1716 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 9.99 | 311 | 0.1707 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 10.99 | 342 | 0.1704 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 11.98 | 373 | 0.1696 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 12.98 | 404 | 0.1698 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 13.98 | 435 | 0.1695 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 14.97 | 466 | 0.1693 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | No log | 16.0 | 498 | 0.1691 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | 0.9743 | 17.0 | 529 | 0.1691 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | 0.9743 | 17.99 | 560 | 0.1690 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | 0.9743 | 18.99 | 591 | 0.1689 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | | 0.9743 | 19.92 | 620 | 0.1689 | 0.6719 | 0.6536 | 0.6719 | 0.6722 | 20.0 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.13.3
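As a usage illustration for the fine-tuned BARThez summarizer above, a minimal sketch with the standard `transformers` seq2seq API (the input text is a placeholder; `max_length=20` mirrors the Gen Len reported in the card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("amasi/barthez-orange-ft")
model = AutoModelForSeq2SeqLM.from_pretrained("amasi/barthez-orange-ft")

# French placeholder input; the base model was trained on OrangeSum abstracts
text = "Le texte à résumer va ici."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

summary_ids = model.generate(**inputs, max_length=20)  # Gen Len in the card is 20
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```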
AlignmentResearch/robust_llm_pythia-imdb-160m-niki-ada-v4-s-0
AlignmentResearch
2024-05-28T20:23:31Z
108
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:23:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChenShov/q-FrozenLake-v1-4x4-noSlipperyc
ChenShov
2024-05-28T20:17:29Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-28T20:17:27Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlipperyc results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # load_from_hub is the helper defined in the Deep RL Course notebook model = load_from_hub(repo_id="ChenShov/q-FrozenLake-v1-4x4-noSlipperyc", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
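Continuing the usage snippet above, a short greedy-rollout sketch, assuming the pickled dict follows the Deep RL Course convention and stores the Q-table under a `qtable` key (adjust the key name if your pickle differs):

```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```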
amazingT/Reinforce-CartPole-v1
amazingT
2024-05-28T20:15:48Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-28T14:17:11Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AlignmentResearch/robust_llm_pythia-imdb-31m-niki-ada-v4-s-0
AlignmentResearch
2024-05-28T20:14:08Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:14:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-imdb-31m-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T20:12:44Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:12:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-imdb-31m-niki-ada-v4-s-2
AlignmentResearch
2024-05-28T20:12:43Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:12:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-imdb-70m-niki-ada-v4-s-1
AlignmentResearch
2024-05-28T20:08:41Z
106
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T20:08:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lewdiculous/phencyclidine-8b-v1-GGUF-IQ-Imatrix
Lewdiculous
2024-05-28T20:07:54Z
27
2
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-05-28T13:34:20Z
--- license: apache-2.0 --- Experimental quants for [Model-Requests#34](https://huggingface.co/Lewdiculous/Model-Requests/discussions/34) - [Fizzarolli/phencyclidine-8b-v1](https://huggingface.co/Fizzarolli/phencyclidine-8b-v1). Requester description: "experimental "storytelling" and roleplaying finetune of llama 3". ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/Tsm1zLCzfK1ds76mzpEDq.png)
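No usage snippet accompanies the quant listing above; a minimal sketch with `llama-cpp-python`, where the GGUF filename is hypothetical — substitute whichever quant file you actually download from the repo:

```python
from llama_cpp import Llama

# Hypothetical filename; use the quant file you downloaded from the repo
llm = Llama(model_path="phencyclidine-8b-v1-Q4_K_M-imat.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short opening scene for a story."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```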
HoaAn2003/Faster-RCNN
HoaAn2003
2024-05-28T20:06:58Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-21T04:07:33Z
--- license: apache-2.0 ---
alexbene/BianqueNet
alexbene
2024-05-28T20:06:10Z
0
0
null
[ "medical", "image-segmentation", "en", "region:us" ]
image-segmentation
2024-05-27T12:33:10Z
--- language: - en pipeline_tag: image-segmentation tags: - medical --- # BianqueNet BianqueNet is a segmentation model based on DeepLabv3+ with additional modules designed to improve segmentation accuracy for intervertebral disc (IVD)-related areas in T2W MR images. It was introduced in the paper [Deep learning-based high-accuracy quantitation for lumbar intervertebral disc degeneration from MRI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8837609/) by Zheng et al. and first released in [this repository](https://github.com/no-saint-no-angel/BianqueNet). > Disclaimer: This model card was not written by the team that released the BianqueNet model. ## Intended uses & limitations You can use this particular checkpoint on sagittal T2-weighted spine MRI images. See the model hub to look for other image segmentation models that might interest you. ## BibTeX entry and citation info ```bibtex @article{zheng2022bianquenet, author = {Zheng, Hua-Dong and Sun, Yue-Li and Kong, De-Wei and Yin, Meng-Chen and Chen, Jiang and Lin, Yong-Peng and Ma, Xue-Feng and Wang, Hongshen and Yuan, Guang-Jie and Yao, Min and Cui, Xue-Jun and Tian, Ying-Zhong and Wang, Yong-Jun}, year = 2022, pages = 841, title = {Deep learning-based high-accuracy quantitation for lumbar intervertebral disc degeneration from MRI}, volume = 13, journal = {Nature Communications}, } ```
mihanismo/art-test-model
mihanismo
2024-05-28T20:05:16Z
0
0
null
[ "art", "text-to-image", "dataset:mihanismo/art-dataset-test", "region:us" ]
text-to-image
2024-05-27T23:07:03Z
--- datasets: - mihanismo/art-dataset-test pipeline_tag: text-to-image tags: - art --- SDXL was fine-tuned for style transfer. It was trained with Dreambooth. Images were generated with the A1111 Stable Diffusion Web UI. Use 'aqrlstylesdxl style' in your prompt to generate images in the current style. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d82e5cacea6137d8b0e65b/02mfYygJ0jEA0_cnrfU-u.png)
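A minimal generation sketch, assuming the repository contains diffusers-format SDXL weights (a single A1111-style checkpoint would instead need `StableDiffusionXLPipeline.from_single_file`); the trigger phrase is taken from the card:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumes diffusers-format weights in the repo
pipe = StableDiffusionXLPipeline.from_pretrained(
    "mihanismo/art-test-model", torch_dtype=torch.float16
).to("cuda")

# 'aqrlstylesdxl style' is the trigger phrase from the model card
image = pipe("a mountain village at dusk, aqrlstylesdxl style").images[0]
image.save("styled.png")
```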
genislab/supervisor
genislab
2024-05-28T19:54:36Z
0
0
adapter-transformers
[ "adapter-transformers", "ai", "genislab", "supervisots", "generators", "summarization", "en", "dataset:nvidia/ChatQA-Training-Data", "dataset:m-a-p/Matrix", "doi:10.57967/hf/2207", "license:mit", "region:us" ]
summarization
2024-05-07T16:44:01Z
--- license: mit datasets: - nvidia/ChatQA-Training-Data - m-a-p/Matrix language: - en metrics: - accuracy - code_eval - bertscore library_name: adapter-transformers pipeline_tag: summarization tags: - ai - genislab - supervisots - generators ---
r1208/aya-qlora_hindi_headline_1000
r1208
2024-05-28T19:52:02Z
77
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-28T19:36:50Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mfebrizio/cra-window-rules
mfebrizio
2024-05-28T19:47:52Z
0
0
null
[ "license:mit", "region:us" ]
null
2024-05-28T19:41:32Z
--- title: Cra Window Rules emoji: 🌍 colorFrom: yellow colorTo: indigo sdk: docker pinned: false license: mit --- This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/). To get started with a new app, do the following: 1) Install Shiny with `pip install shiny` 2) Create a new app with `shiny create .` 3) Then run the app with `shiny run --reload` To learn more about this framework, please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html).
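To make step 2 concrete, here is a minimal `app.py` of the kind `shiny create .` scaffolds — illustrative only, not the actual contents of this Space:

```python
from shiny import App, render, ui

# UI: one slider and a text output
app_ui = ui.page_fluid(
    ui.input_slider("n", "N", min=1, max=100, value=50),
    ui.output_text("txt"),
)

# Server: react to the slider and render the doubled value
def server(input, output, session):
    @output
    @render.text
    def txt():
        return f"n*2 is {input.n() * 2}"

app = App(app_ui, server)
```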
Eya0910/finetuned-gemma4
Eya0910
2024-05-28T19:47:30Z
135
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T19:43:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14028623
fine-tuned
2024-05-28T19:46:23Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14028623", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T19:45:48Z
--- license: apache-2.0 datasets: - fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14028623 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14028623', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
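Since SciFact is a retrieval benchmark, a retrieval-style sketch may also be useful (this assumes the model follows the standard sentence-transformers API; the corpus and query strings are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer(
    'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14028623',
    trust_remote_code=True
)

# Placeholder corpus; in practice these would be your candidate passages
corpus = [
    'Aspirin reduces the risk of cardiovascular events.',
    'Mitochondria are the site of oxidative phosphorylation.',
]
corpus_embeddings = model.encode(corpus)
query_embeddings = model.encode(['What reduces cardiovascular risk?'])

# Rank corpus passages by cosine similarity to each query
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
print(hits)  # [[{'corpus_id': ..., 'score': ...}, ...]]
```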
datek/google-gemma-2b-1716925355
datek
2024-05-28T19:45:26Z
136
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T19:42:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-3292683
fine-tuned
2024-05-28T19:44:32Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-3292683", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T19:44:01Z
--- license: apache-2.0 datasets: - fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-3292683 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-3292683', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
BVRA/tf_efficientnet_b3.in1k_ft_fungitastic-m_384
BVRA
2024-05-28T19:43:48Z
8
0
DanishFungi
[ "DanishFungi", "pytorch", "image-classification", "ecology", "fungi", "FGVC", "license:cc-by-nc-4.0", "region:us" ]
image-classification
2024-05-28T19:43:42Z
--- tags: - image-classification - ecology - fungi - FGVC library_name: DanishFungi license: cc-by-nc-4.0 --- # Model card for BVRA/tf_efficientnet_b3.in1k_ft_fungitastic-m_384 ## Model Details - **Model Type:** Danish Fungi Classification - **Model Stats:** - Params (M): 11.0 - Image size: 384 x 384 - **Papers:** - **Original:** ?? - **Train Dataset:** DF24m --> https://sites.google.com/view/danish-fungi-dataset ## Model Usage ### Image Embeddings ```python import timm import torch import torchvision.transforms as T from PIL import Image model = timm.create_model("hf-hub:BVRA/tf_efficientnet_b3.in1k_ft_fungitastic-m_384", pretrained=True) model = model.eval() train_transforms = T.Compose([T.Resize((384, 384)), T.ToTensor(), T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) img = Image.open("path/to/your_image.jpg") output = model(train_transforms(img).unsqueeze(0)) # output is a (1, num_classes) shaped tensor of class logits ``` ## Citation ```bibtex @InProceedings{Picek_2022_WACV, author = {Picek, Luk{\'a}{\v{s}} and {\v{S}}ulc, Milan and Matas, Ji{\v{r}}{\'\i} and Jeppesen, Thomas S. and Heilmann-Clausen, Jacob and L{\ae}ss{\o}e, Thomas and Fr{\o}slev, Tobias}, title = {Danish Fungi 2020 - Not Just Another Image Recognition Dataset}, booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, month = {January}, year = {2022}, pages = {1525-1535} } ``` ```bibtex @article{picek2022automatic, title={Automatic Fungi Recognition: Deep Learning Meets Mycology}, author={Picek, Luk{\'a}{\v{s}} and {\v{S}}ulc, Milan and Matas, Ji{\v{r}}{\'\i} and Heilmann-Clausen, Jacob and Jeppesen, Thomas S and Lind, Emil}, journal={Sensors}, volume={22}, number={2}, pages={633}, year={2022}, publisher={Multidisciplinary Digital Publishing Institute} } ```
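Following the usage snippet above, here is a minimal sketch of turning the model output into ranked predictions (a hypothetical follow-up; the mapping from class indices to species names is not documented on this card and would have to come from the dataset metadata):

```python
import torch

# Assumes `output` is the (1, num_classes) logits tensor from the snippet above
probabilities = torch.softmax(output, dim=1)
top5_prob, top5_idx = torch.topk(probabilities, k=5)
for prob, idx in zip(top5_prob[0], top5_idx[0]):
    print(f"class index {idx.item()}: p={prob.item():.4f}")
```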
CMU-AIR2/math-llama3-arithStep2K-MWP6K
CMU-AIR2
2024-05-28T19:40:02Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-28T19:34:35Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF
legraphista
2024-05-28T19:39:47Z
418
1
gguf
[ "gguf", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "text-generation", "base_model:princeton-nlp/Llama-3-Instruct-8B-SimPO", "base_model:quantized:princeton-nlp/Llama-3-Instruct-8B-SimPO", "region:us" ]
text-generation
2024-05-28T18:27:08Z
--- base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO inference: false library_name: gguf pipeline_tag: text-generation quantized_by: legraphista tags: - quantized - GGUF - imatrix - quantization - imat - imatrix - static --- # Llama-3-Instruct-8B-SimPO-IMat-GGUF _Llama.cpp imatrix quantization of princeton-nlp/Llama-3-Instruct-8B-SimPO_ Original Model: [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3023](https://github.com/ggerganov/llama.cpp/releases/tag/b3023) IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) - [Llama-3-Instruct-8B-SimPO-IMat-GGUF](#llama-3-instruct-8b-simpo-imat-gguf) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Llama-3-Instruct-8B-SimPO.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Llama-3-Instruct-8B-SimPO.BF16.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.BF16.gguf) | BF16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.FP16.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No | 
[Llama-3-Instruct-8B-SimPO.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.Q5_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q5_K.gguf) | Q5_K | 5.73GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.Q5_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q5_K_S.gguf) | Q5_K_S | 5.60GB | ✅ Available | ⚪ Static | 📦 No | [Llama-3-Instruct-8B-SimPO.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q4_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K_S.gguf) | Q4_K_S | 4.69GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ4_NL.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ4_NL.gguf) | IQ4_NL | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ4_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ4_XS.gguf) | IQ4_XS | 4.45GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf) | Q3_K_L | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf) | Q3_K_S | 3.66GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ3_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_M.gguf) | IQ3_M | 3.78GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ3_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_S.gguf) | IQ3_S | 3.68GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ3_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_XS.gguf) | IQ3_XS | 3.52GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ3_XXS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf) | Q2_K_S | 2.99GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ2_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_M.gguf) | IQ2_M | 2.95GB | ✅ 
Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ2_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_S.gguf) | IQ2_S | 2.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ2_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_XS.gguf) | IQ2_XS | 2.61GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ2_XXS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_XXS.gguf) | IQ2_XXS | 2.40GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ1_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ1_M.gguf) | IQ1_M | 2.16GB | ✅ Available | 🟢 IMatrix | 📦 No | [Llama-3-Instruct-8B-SimPO.IQ1_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ1_S.gguf) | IQ1_S | 2.02GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.BF16.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.BF16/*" --local-dir ./ # see FAQ for merging GGUFs ``` --- ## Inference ### Simple chat template ``` <|im_start|>user Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|> <|im_start|>assistant Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|> <|im_start|>user What about solving a 2x + 3 = 7 equation?<|im_end|> ``` ### Chat template with system prompt ``` <|im_start|>system You are a helpful AI.<|im_end|> <|im_start|>user Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|> <|im_start|>assistant Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|> <|im_start|>user What about solving a 2x + 3 = 7 equation?<|im_end|> ``` ### Llama.cpp ``` llama.cpp/main -m Llama-3-Instruct-8B-SimPO.BF16.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. 
Locate your GGUF chunks folder (ex: `Llama-3-Instruct-8B-SimPO.BF16`) 3. Run `gguf-split --merge Llama-3-Instruct-8B-SimPO.BF16/Llama-3-Instruct-8B-SimPO.BF16-00001-of-XXXXX.gguf Llama-3-Instruct-8B-SimPO.BF16.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
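For scripted setups, the same single-file download can also be done from Python with the standard `huggingface_hub` API (a minimal sketch; the quant filename is just one entry from the tables above):

```python
from huggingface_hub import hf_hub_download

# Fetch one quant file; swap `filename` for any entry in the quant tables
path = hf_hub_download(
    repo_id="legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF",
    filename="Llama-3-Instruct-8B-SimPO.Q4_K.gguf",
    local_dir="./",
)
print(path)
```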
CMU-AIR2/math-llama3-arithStep2K-MWP4K
CMU-AIR2
2024-05-28T19:39:30Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-28T19:34:02Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
Lakoc/uni500_cz
Lakoc
2024-05-28T19:39:00Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T19:38:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CMU-AIR2/math-llama3-arithStep2K-MWP2K
CMU-AIR2
2024-05-28T19:38:28Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-28T19:33:55Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
th041/vit-weld-classify
th041
2024-05-28T19:37:54Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T19:15:19Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-weld-classify results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.6894977168949772 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-weld-classify This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7966 - Accuracy: 0.6895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 18 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.8686 | 0.8130 | 100 | 0.7966 | 0.6895 | | 0.6935 | 1.6260 | 200 | 1.2217 | 0.5068 | | 0.4225 | 2.4390 | 300 | 0.9592 | 0.6210 | | 0.2586 | 3.2520 | 400 | 1.3123 | 0.5936 | | 0.237 | 4.0650 | 500 | 0.8075 | 0.6986 | | 0.2658 | 4.8780 | 600 | 1.0878 | 0.6210 | | 0.1904 | 5.6911 | 700 | 1.1048 | 0.7169 | | 0.0964 | 6.5041 | 800 | 1.3602 | 0.6849 | | 0.0474 | 7.3171 | 900 | 1.1331 | 0.7671 | | 0.1179 | 8.1301 | 1000 | 1.1228 | 0.7306 | | 0.0447 | 8.9431 | 1100 | 1.2609 | 0.7397 | | 0.0043 | 9.7561 | 1200 | 1.1746 | 0.7763 | | 0.1059 | 10.5691 | 1300 | 1.1867 | 0.7763 | | 0.0026 | 11.3821 | 1400 | 1.2890 | 0.7534 | | 0.0039 | 12.1951 | 1500 | 1.3283 | 0.7580 | | 0.002 | 13.0081 | 1600 | 1.1871 | 0.7671 | | 0.0019 | 13.8211 | 1700 | 1.1643 | 0.7900 | | 0.0264 | 14.6341 | 1800 | 1.1537 | 0.7900 | | 0.0015 | 15.4472 | 1900 | 1.1821 | 0.7945 | | 0.0015 | 16.2602 | 2000 | 1.1962 | 0.7900 | | 0.0014 | 17.0732 | 2100 | 1.2036 | 0.7900 | | 0.0014 | 17.8862 | 2200 | 1.2067 | 0.7900 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
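The auto-generated card omits a usage example; below is a minimal inference sketch under the assumption that the checkpoint loads through the standard `transformers` image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT weld classifier and score a single image
classifier = pipeline("image-classification", model="th041/vit-weld-classify")
predictions = classifier("path/to/weld_image.jpg")
print(predictions)  # [{'label': ..., 'score': ...}, ...]
```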
canho/koalpaca-5.8b-3epochs-total-data
canho
2024-05-28T19:33:38Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T19:33:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
emmapog/ProjectElrondv1
emmapog
2024-05-28T19:33:10Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-05-28T19:29:45Z
--- library_name: peft base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
CMU-AIR2/math-llama3-instruct-MWP6K
CMU-AIR2
2024-05-28T19:32:23Z
1
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2024-05-28T19:26:06Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B-Instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
quinnb/whisper-test-custom
quinnb
2024-05-28T19:29:36Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-28T19:29:21Z
--- language: - hi license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 model-index: - name: Whisper Large v3 Trained on Hindi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large v3 Trained on Hindi This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Custom Hindi dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
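A hedged inference sketch for the checkpoint above, using the repo id from this row; the `language`/`task` generation hints follow the usual Whisper convention, and the audio file name is a placeholder:

```python
# ASR inference with the fine-tuned Whisper checkpoint (audio path is a placeholder).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="quinnb/whisper-test-custom")

# Hint Whisper to transcribe in Hindi rather than auto-detect or translate.
result = asr("sample_hindi.wav", generate_kwargs={"language": "hindi", "task": "transcribe"})
print(result["text"])
```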
atepeq/Mistral-7B-Instruct-v0.2_musk_r32
atepeq
2024-05-28T19:28:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T19:28:10Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** atepeq - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
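For loading the model itself the card only points at Unsloth; a sketch of the usual Unsloth inference path (kwargs vary somewhat across unsloth releases, and a CUDA device is assumed):

```python
# Unsloth inference sketch; exact kwargs may differ across unsloth versions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="atepeq/Mistral-7B-Instruct-v0.2_musk_r32",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("[INST] Tell me about reusable rockets. [/INST]", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```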
lsmille/lora_evo_ta_all_layers_11
lsmille
2024-05-28T19:28:16Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T08:32:14Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_11 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9276 ## Model description *Good model* with A100 lora_alpha = 256 <---- same as 9 lora_dropout = 0.05 lora_r = 128 <---- same as 9 epochs = 3 learning rate = 3e-4 warmup_steps=10 gradient_accumulation_steps = 8 train_batch = 2 <--- larger batch only with A100 eval_batch = 2 <--- ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0064 | 1.0 | 133 | 2.9583 | | 2.8336 | 2.0 | 266 | 2.9207 | | 2.6903 | 3.0 | 399 | 2.9276 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
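The hyperparameters in the description above map directly onto a `peft.LoraConfig`; a sketch of that mapping (the `target_modules` list is a placeholder, since the card does not name the adapted modules of the Evo architecture):

```python
# LoRA configuration matching the card: r=128, alpha=256, dropout=0.05.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,          # LoRA rank, per the card
    lora_alpha=256, # scaling factor, per the card
    lora_dropout=0.05,
    bias="none",
    target_modules=["Wqkv", "out_proj"],  # placeholder names, not from the card
)
# The adapter is then created with: model = get_peft_model(base_model, lora_config)
```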
lsmille/lora_evo_ta_all_layers_10
lsmille
2024-05-28T19:27:04Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T08:20:50Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_10 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9544 ## Model description *Good model* with A100 lora_alpha = 512 <---- 2x than that in model 9 (total 4x) lora_dropout = 0.05 lora_r = 128 <---- same as model 9 epochs = 3 learning rate = 3e-4 warmup_steps=10 gradient_accumulation_steps = 8 train_batch = 1 eval_batch = 1 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0259 | 1.0 | 266 | 2.9633 | | 2.8429 | 2.0 | 532 | 2.9493 | | 2.688 | 3.0 | 798 | 2.9544 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
statking/paligemma-vqa
statking
2024-05-28T19:26:51Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "paligemma", "generated_from_trainer", "dataset:vq_av2", "base_model:google/paligemma-3b-pt-224", "base_model:adapter:google/paligemma-3b-pt-224", "license:gemma", "region:us" ]
null
2024-05-24T04:12:51Z
--- license: gemma library_name: peft tags: - generated_from_trainer base_model: google/paligemma-3b-pt-224 datasets: - vq_av2 model-index: - name: paligemma-vqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma-vqa This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5618 | 0.5886 | 1000 | 0.5531 | | 0.5268 | 1.1772 | 2000 | 0.5335 | | 0.5099 | 1.7657 | 3000 | 0.5071 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.2.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
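A hedged VQA inference sketch for this adapter; it assumes access to the gated google/paligemma-3b-pt-224 base, transformers >= 4.41, and the usual "answer en" PaliGemma prompt prefix (image path and question are placeholders):

```python
# Load the PaliGemma base, attach this VQA adapter, and answer a question.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel

base_id = "google/paligemma-3b-pt-224"
processor = AutoProcessor.from_pretrained(base_id)
base = PaliGemmaForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "statking/paligemma-vqa")

image = Image.open("example.jpg")
inputs = processor(text="answer en What is in the picture?", images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(generated[0], skip_special_tokens=True))
```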
ostromb/oho-dpo
ostromb
2024-05-28T19:26:40Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/tinyllama", "base_model:adapter:unsloth/tinyllama", "region:us" ]
null
2024-05-28T19:26:35Z
--- library_name: peft base_model: unsloth/tinyllama --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
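The repo name suggests DPO training on top of unsloth/tinyllama, but the card gives no recipe. A speculative sketch using trl's DPOTrainer (the dataset path is a placeholder that must provide "prompt"/"chosen"/"rejected" string columns, and the signature follows the trl ~0.8-era API, which has since shifted):

```python
# Speculative DPO training sketch; dataset and arguments are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "unsloth/tinyllama"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: a preference dataset with "prompt", "chosen", "rejected" columns.
dataset = load_dataset("path/to/preference_dataset", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl builds an implicit frozen reference copy
    beta=0.1,        # strength of the KL penalty toward the reference model
    args=TrainingArguments(output_dir="oho-dpo", per_device_train_batch_size=1),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```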
myrulezzzz/tinyllama_tele_customer_support
myrulezzzz
2024-05-28T19:26:17Z
11
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:quantized:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T19:25:54Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** myrulezzzz - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
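Since this repo ships GGUF weights, a typical consumption path is llama-cpp-python; a sketch, where the .gguf filename pattern is an assumption (check the repo's file list for the real name):

```python
# Run the GGUF export with llama-cpp-python; the filename pattern is a guess.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="myrulezzzz/tinyllama_tele_customer_support",
    filename="*.gguf",  # placeholder glob; pick the actual quantization file
)
output = llm("Customer: How do I reset my router?\nAgent:", max_tokens=128)
print(output["choices"][0]["text"])
```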
lsmille/lora_evo_ta_all_layers_9
lsmille
2024-05-28T19:25:25Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T08:10:58Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_9 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9312 ## Model description *Good model* with A100 lora_alpha = 256 <---- only with A100 lora_dropout = 0.05 lora_r = 128 <---- only with A100 epochs = 3 learning rate = 3e-4 warmup_steps=10 gradient_accumulation_steps = 8 train_batch = 1 eval_batch = 1 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0125 | 1.0 | 266 | 2.9774 | | 2.8409 | 2.0 | 532 | 2.9347 | | 2.6991 | 3.0 | 798 | 2.9312 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
lsmille/lora_evo_ta_all_layers_8
lsmille
2024-05-28T19:23:12Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T07:28:18Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_8 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9102 ## Model description *BEST MODEL* lora_alpha = 32 lora_dropout = 0.05 lora_r = 16 epochs = 3 learning rate = 3e-4 warmup_steps=0.5 gradient_accumulation_steps = 1 <---- virtual batch of 1 (update every sample) train_batch = 1 eval_batch = 1 ## Intended uses & limitations More information needed ## Training and evaluation data See the data files in this repository. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.5 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0004 | 1.0 | 266 | 2.9540 | | 2.8175 | 2.0 | 532 | 2.9155 | | 2.6755 | 3.0 | 798 | 2.9102 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Lakoc/uni200_cz
Lakoc
2024-05-28T19:20:52Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T19:20:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lsmille/lora_evo_ta_all_layers_6
lsmille
2024-05-28T19:20:45Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T06:37:46Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_6 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1104 ## Model description lora_alpha = 32 lora_dropout = 0.05 lora_r = 16 epochs = 3 learning rate = 3e-4 warmup_steps=0.5 gradient_accumulation_steps = 8 train_batch = 1 eval_batch = 1 Training of only the last 40 linear modules [120:160] instead of [0:160] <------ This changes the # of trainable params to 8,914,944 ## Intended uses & limitations More information needed ## Training and evaluation data See the data files in this repository. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.5 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.1331 | 0.9925 | 33 | 3.1104 | | 3.1331 | 1.9850 | 66 | 3.1104 | | 3.0635 | 2.9774 | 99 | 3.1104 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
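The "[120:160]" note above refers to adapting only a slice of the network's linear modules; a hedged sketch of how such a slice is typically collected for `target_modules` (module discovery is generic PyTorch, but the resulting names depend on the Evo architecture):

```python
# Collect the names of the last 40 nn.Linear submodules for targeted LoRA.
import torch.nn as nn
from peft import LoraConfig

def linear_module_slice(model, start=120, end=160):
    """Return the names of nn.Linear submodules in positions [start:end]."""
    names = [n for n, m in model.named_modules() if isinstance(m, nn.Linear)]
    return names[start:end]

# config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
#                     target_modules=linear_module_slice(base_model))
```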
CMU-AIR2/math-llama3-arith2k-mwp8k
CMU-AIR2
2024-05-28T19:20:13Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-28T03:46:20Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
CMU-AIR2/math-llama3-arith2k-mwp10k
CMU-AIR2
2024-05-28T19:20:05Z
4
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-28T03:46:27Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
lsmille/lora_evo_ta_all_layers_1
lsmille
2024-05-28T19:18:32Z
5
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T02:21:14Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_1 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9448 ## Model description lora_alpha = 32 lora_dropout = 0.05 lora_r = 16 epochs = 3 learning rate = 3e-4 warmup_steps=0.5 gradient_accumulation_steps = 8 train_batch = 1 eval_batch = 1 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.5 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.084 | 0.9925 | 33 | 2.9871 | | 2.9303 | 1.9850 | 66 | 2.9553 | | 2.7579 | 2.9774 | 99 | 2.9448 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
ahcene-ikram/vizwiz_model
ahcene-ikram
2024-05-28T19:14:31Z
63
0
transformers
[ "transformers", "safetensors", "vilt", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
visual-question-answering
2024-05-28T18:12:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
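A hedged inference sketch for this ViLT VQA checkpoint, assuming the processor files were pushed alongside the weights (image path and question are placeholders):

```python
# Visual question answering with the fine-tuned ViLT checkpoint.
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

repo = "ahcene-ikram/vizwiz_model"
processor = ViltProcessor.from_pretrained(repo)
model = ViltForQuestionAnswering.from_pretrained(repo)

image = Image.open("photo.jpg")
question = "What color is the mug?"
inputs = processor(image, question, return_tensors="pt")

logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```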
LordY54/recophi3_f16
LordY54
2024-05-28T19:11:25Z
78
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T05:50:09Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit widget: - text: "Instruction:\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.. Remember that you are a friendly virtual nutritionist, you're talking to patient directly so let the patient know everything he need in your recomendation (extend it as much as you need). you're not going to be asked questions back\n\n### Input:\nCase: 2, Nombre: Ana, Edad: 42, Estilo de vida: Moderado, trabajo: Docente universitaria. Información antropometrica antropometría [altura : 1.68, peso: 75, IMC:25.8, circunferencia_cintura: 100]. Indicadores bioquimicos: [glucosa: normal, colesterol: medio], Información dieta (ingesta): [preferencias: [comida rápida]], comidas comunes: [donas, hamburguesa con papas fritas], frutas y verduras: bajo, fibra: bajo, grasas_saturadas: alto, azucares: alto]. Indicadores sociales: [estado_civil: Casada, ingreso: Bajo, accesoa a alimentos saludables: True]. Información sobre actividad física: [actividad_fisica: Media, actividades_diarias:[Ejercicio en casa, pasear con el perro]]." example_title: "Nutrition recommendation" --- # Uploaded model - **Developed by:** LordY54 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lsmille/lora_evo_ta_all_layers_4
lsmille
2024-05-28T19:10:45Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T05:43:16Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_4 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2625 ## Model description lora_alpha = 32 lora_dropout = 0.05 lora_r = 16 epochs = 3 learning rate = 3e-3 <--------- (10x larger) warmup_steps=0.5 gradient_accumulation_steps = 8 train_batch = 1 eval_batch = 1 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.5 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.472 | 0.9925 | 33 | 3.2926 | | 3.3114 | 1.9850 | 66 | 3.2669 | | 3.2254 | 2.9774 | 99 | 3.2625 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
lsmille/lora_evo_ta_all_layers_2
lsmille
2024-05-28T19:10:07Z
6
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:togethercomputer/evo-1-8k-base", "base_model:adapter:togethercomputer/evo-1-8k-base", "license:apache-2.0", "region:us" ]
null
2024-05-28T04:19:32Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: togethercomputer/evo-1-8k-base model-index: - name: lora_evo_ta_all_layers_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_evo_ta_all_layers_2 This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1660 ## Model description lora_alpha = 32 lora_dropout = 0.05 lora_r = 16 epochs = 9 <--------------- learning rate = 3e-4 warmup_steps=0.5 gradient_accumulation_steps = 8 train_batch = 1 eval_batch = 1 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.5 - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.0681 | 0.9925 | 33 | 2.9815 | | 2.9165 | 1.9850 | 66 | 2.9530 | | 2.8091 | 2.9774 | 99 | 2.9446 | | 2.6361 | 4.0 | 133 | 2.9406 | | 2.6312 | 4.9925 | 166 | 2.9409 | | 2.57 | 5.9850 | 199 | 2.9978 | | 2.5215 | 6.9774 | 232 | 3.0450 | | 2.4107 | 8.0 | 266 | 3.0763 | | 2.4272 | 8.9323 | 297 | 3.1660 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
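In the results above, the validation loss bottoms out near epoch 4 (2.9406) and climbs back to 3.1660 by epoch 9, a classic overfitting curve; an illustrative early-stopping setup (not part of the original run) that would instead keep the best checkpoint:

```python
# Illustrative early-stopping configuration; not from the original training run.
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="lora_evo_ta_all_layers_2",
    num_train_epochs=9,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,        # restore the best (epoch-4-style) checkpoint
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
# trainer = Trainer(model=model, args=args, ...,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=2)])
```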
amiguel/lightining_studio
amiguel
2024-05-28T19:01:17Z
0
0
adapter-transformers
[ "adapter-transformers", "medical", "text-classification", "dataset:HuggingFaceFW/fineweb", "license:apache-2.0", "region:us" ]
text-classification
2024-05-22T06:09:31Z
--- license: apache-2.0 datasets: - HuggingFaceFW/fineweb metrics: - accuracy library_name: adapter-transformers pipeline_tag: text-classification tags: - medical ---
dtorber/BioNLP-conditional-tokens-decoder-eLife
dtorber
2024-05-28T18:59:39Z
97
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "summarization", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2024-05-28T10:54:52Z
--- tags: - summarization - generated_from_trainer model-index: - name: BioNLP-conditional-tokens-decoder-eLife results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioNLP-conditional-tokens-decoder-eLife This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.3739167643078955e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 1.13.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
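A hedged summarization sketch for this LED checkpoint; the repo id comes from the row above, the input text is a placeholder, and the global-attention mask follows the usual LED convention of marking the first token:

```python
# Long-document summarization with the LED checkpoint above.
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

repo = "dtorber/BioNLP-conditional-tokens-decoder-eLife"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = LEDForConditionalGeneration.from_pretrained(repo)

article = "Long biomedical article text goes here ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=16384)

# LED needs a global-attention mask; putting it on the first token is standard.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=512,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```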
phongtintruong/misjava-api-052924
phongtintruong
2024-05-28T18:56:47Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T18:26:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
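The "How to Get Started" section of the card above is still a placeholder. A minimal sketch, assuming the tokenizer ships a chat template (the tags mark the model as conversational) and with an illustrative prompt:

```python
# Assumed usage: a conversational Mistral-style checkpoint with a chat template.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "phongtintruong/misjava-api-052924"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello, what can you do?"}]  # illustrative
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```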
luciorramos/llm_tcc_sp90_ep90_ds1000
luciorramos
2024-05-28T18:56:46Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T18:49:35Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** luciorramos - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
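As a usage note, the uploaded weights load with plain transformers; the snippet below is a sketch, with the prompt and generation settings as illustrative assumptions:

```python
# Sketch: load the fine-tuned checkpoint for generation with vanilla transformers.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "luciorramos/llm_tcc_sp90_ep90_ds1000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```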
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668
fine-tuned
2024-05-28T18:56:30Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:55:59Z
--- license: apache-2.0 datasets: - fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-24541174
fine-tuned
2024-05-28T18:55:55Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-24541174", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:55:20Z
--- license: apache-2.0 datasets: - fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-24541174 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-24541174', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964
fine-tuned
2024-05-28T18:55:41Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:55:04Z
--- license: apache-2.0 datasets: - fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
mago18/donut-demo
mago18
2024-05-28T18:55:34Z
49
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-28T18:55:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
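Since the template above carries no usage code, here is a heavily hedged sketch for a Donut-style vision-encoder-decoder checkpoint; the input image and the absence of a task prompt are assumptions, as the card documents neither:

```python
# Assumed usage for a Donut-style checkpoint; the card specifies no task prompt.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("mago18/donut-demo")
model = VisionEncoderDecoderModel.from_pretrained("mago18/donut-demo")

image = Image.open("document.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```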
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-44219785
fine-tuned
2024-05-28T18:55:32Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-44219785", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:54:53Z
--- license: apache-2.0 datasets: - fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-44219785 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-44219785', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
th041/vit-weldclassify
th041
2024-05-28T18:55:25Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T18:40:30Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-weldclassify results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.817351598173516 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-weldclassify This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0639 - Accuracy: 0.8174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 18 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.8311 | 0.8130 | 100 | 0.9623 | 0.4886 | | 0.6016 | 1.6260 | 200 | 0.5911 | 0.7215 | | 0.2602 | 2.4390 | 300 | 1.0585 | 0.6393 | | 0.1643 | 3.2520 | 400 | 0.9470 | 0.7169 | | 0.3754 | 4.0650 | 500 | 0.6054 | 0.8082 | | 0.1446 | 4.8780 | 600 | 0.6845 | 0.7854 | | 0.138 | 5.6911 | 700 | 0.9011 | 0.7534 | | 0.033 | 6.5041 | 800 | 0.8366 | 0.8128 | | 0.0538 | 7.3171 | 900 | 0.9102 | 0.7854 | | 0.0144 | 8.1301 | 1000 | 0.8510 | 0.8128 | | 0.0459 | 8.9431 | 1100 | 0.8610 | 0.8219 | | 0.0022 | 9.7561 | 1200 | 0.9398 | 0.8082 | | 0.0019 | 10.5691 | 1300 | 0.8714 | 0.8356 | | 0.0015 | 11.3821 | 1400 | 1.0001 | 0.8128 | | 0.0013 | 12.1951 | 1500 | 0.9926 | 0.8219 | | 0.0012 | 13.0081 | 1600 | 1.0175 | 0.8219 | | 0.0011 | 13.8211 | 1700 | 1.0323 | 0.8219 | | 0.001 | 14.6341 | 1800 | 1.0453 | 0.8174 | | 0.0009 | 15.4472 | 1900 | 1.0518 | 0.8174 | | 0.0009 | 16.2602 | 2000 | 1.0585 | 0.8174 | | 0.0009 | 17.0732 | 2100 | 1.0623 | 0.8174 | | 0.0009 | 17.8862 | 2200 | 1.0639 | 0.8174 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
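The evaluation table above reports accuracy only; for trying the classifier, a minimal inference sketch (the image path is illustrative, and the weld-defect label set comes from the fine-tuning imagefolder dataset, so it is not listed here):

```python
# Minimal sketch: classify a weld image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="th041/vit-weldclassify")
predictions = classifier("weld_sample.jpg")  # hypothetical image path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```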
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559
fine-tuned
2024-05-28T18:54:57Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:54:18Z
--- license: apache-2.0 datasets: - fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241
fine-tuned
2024-05-28T18:54:49Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:54:18Z
--- license: apache-2.0 datasets: - fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585
fine-tuned
2024-05-28T18:54:24Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:53:55Z
--- license: apache-2.0 datasets: - fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-76823162
fine-tuned
2024-05-28T18:54:20Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-76823162", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:53:49Z
--- license: apache-2.0 datasets: - fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-76823162 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-76823162', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```