modelId
stringlengths
5
139
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]date
2020-02-15 11:33:14
2025-06-28 12:28:24
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
500 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
54 values
createdAt
timestamp[us, tz=UTC]date
2022-03-02 23:29:04
2025-06-28 12:27:53
card
stringlengths
11
1.01M
yashika-03/prompt-injection-bert
yashika-03
2024-05-23T06:18:37Z
131
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T06:18:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eeeyounglee/EEVE-10.8B-Dense-Finetune-1
eeeyounglee
2024-05-23T06:12:47Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "llama", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-23T06:10:18Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # eeeyounglee/EEVE-10.8B-Dense-Finetune-1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('eeeyounglee/EEVE-10.8B-Dense-Finetune-1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eeeyounglee/EEVE-10.8B-Dense-Finetune-1) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 900 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1000, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 90, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: LlamaModel (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 4096, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
MainakMaitra/mistral_7b_instruct_finetuned_multi_intent
MainakMaitra
2024-05-23T06:09:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T06:09:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/hLaQB2g1TvhKlzgV
hgnoi
2024-05-23T06:09:17Z
129
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T06:07:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
huudan12345/distilGPT2_en_xsum
huudan12345
2024-05-23T05:59:25Z
145
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T05:59:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Keerthanah2002/rmj
Keerthanah2002
2024-05-23T05:58:17Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-23T05:51:30Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### rmj Dreambooth model trained by Keerthanah2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(12).jpg) ![1](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(2).jpg) ![2](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(3).jpg) ![3](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(9).jpg) ![4](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(19).jpeg) ![5](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(13).jpg) ![6](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(16).jpeg) ![7](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(5).jpg) ![8](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(14).jpg) ![9](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(18).jpeg) ![10](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(0).jpg) ![11](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(17).jpeg) ![12](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(6).jpg) ![13](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(11).jpg) ![14](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(1).jpg) ![15](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(8).jpg) ![16](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(10).jpg) ![17](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(4).jpg) ![18](https://huggingface.co/Keerthanah2002/rmj/resolve/main/sample_images/dog(15).jpg)
ULRs/llama-3-8b-topic-classification-ur
ULRs
2024-05-23T05:51:38Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2024-05-23T05:51:08Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B-Instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Harshithacj123/llama_classification
Harshithacj123
2024-05-23T05:47:58Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T19:56:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
roif123/mistral_7b-instruct-data4
roif123
2024-05-23T05:46:04Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T05:45:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flipwooyoung/results
flipwooyoung
2024-05-23T05:45:31Z
166
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T05:45:04Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base-960h tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2.dev0 - Tokenizers 0.19.1
kayfahaarukku/AingDiffusion-XL
kayfahaarukku
2024-05-23T05:37:24Z
0
7
null
[ "license:other", "region:us" ]
null
2024-03-20T12:05:42Z
--- license: other --- Backup page of [AingDiffusion XL's Civitai Model Card](https://civitai.com/models/226403) AingDiffusion XL (read: Ah-eeng Diffusion XL) merges anime models based on SDXL plus some extra self-trained datasets. This model is capable of generating high-quality anime images. The word "aing" came from informal Sundanese; it means "I" or "My". The name represents that this model produces images relevant to my taste. ## Guide to generating good images with this model (This guide is relevant to the latest version of AingDiffusion XL) - This model uses Animagine's way of prompting: `1girl/1boy, character name, from what series, what style, everything else in any order, masterpiece, best quality, very aesthetic, absurdres` It is also recommended to include the style name of the images you want to generate in the prompt (eg. anime screencap, sketch, pixel art, famous artist name, etc.). - The standard negative prompt recommended for this model: `nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, watermark, artistic error, username, scan, [abstract],` - Use SDXL-recommended resolutions. This is SDXL, lower resolution always performs worse. - The recommended sampler is "Euler a" with ~7 CFG and ~25 steps. - Set ENSD (eta noise seed delta) to 31337 [to replicate image]. ## Character List AingDiffusion XL is a derivative of Animagine XL v3. The latest version of AingDiffusion XL is a derivative of Animagine XL 3.1, which has over 4.9k characters trained on it. You can see the full list here: https://huggingface.co/spaces/cagliostrolab/animagine-xl-3.1/blob/main/wildcard/characterfull.txt ## Legals Due to the usage of Animagine XL with the merge, the model is now using [Fair AI Public License 1.0-SD license](https://freedevproject.org/faipl-1.0-sd/).
germanchura/modelo_entrenado_01
germanchura
2024-05-23T05:34:18Z
24
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-23T03:12:58Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer model-index: - name: modelo_entrenado_01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modelo_entrenado_01 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 267 | 0.6835 | | 0.9199 | 2.0 | 534 | 0.6370 | | 0.9199 | 3.0 | 801 | 0.5897 | | 0.6357 | 4.0 | 1068 | 0.5777 | | 0.6357 | 5.0 | 1335 | 0.5880 | | 0.5711 | 6.0 | 1602 | 0.5634 | | 0.5711 | 7.0 | 1869 | 0.5716 | | 0.5481 | 8.0 | 2136 | 0.5407 | | 0.5481 | 9.0 | 2403 | 0.5352 | | 0.5204 | 10.0 | 2670 | 0.5153 | | 0.5204 | 11.0 | 2937 | 0.5037 | | 0.492 | 12.0 | 3204 | 0.4821 | | 0.492 | 13.0 | 3471 | 0.4890 | | 0.4854 | 14.0 | 3738 | 0.4826 | | 0.48 | 15.0 | 4005 | 0.4718 | | 0.48 | 16.0 | 4272 | 0.4758 | | 0.464 | 17.0 | 4539 | 0.4655 | | 0.464 | 18.0 | 4806 | 0.4870 | | 0.4575 | 19.0 | 5073 | 0.4544 | | 0.4575 | 20.0 | 5340 | 0.4559 | | 0.4484 | 21.0 | 5607 | 0.5187 | | 0.4484 | 22.0 | 5874 | 0.4987 | | 0.4414 | 23.0 | 6141 | 0.4673 | | 0.4414 | 24.0 | 6408 | 0.4795 | | 0.4323 | 25.0 | 6675 | 0.4692 | | 0.4323 | 26.0 | 6942 | 0.4749 | | 0.4333 | 27.0 | 7209 | 0.4828 | | 0.4333 | 28.0 | 7476 | 0.4351 | | 0.4313 | 29.0 | 7743 | 0.4405 | | 0.4292 | 30.0 | 8010 | 0.4614 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
juliuserictuliao/w2v-bert-2.0-tagalog-colab-CV16-4
juliuserictuliao
2024-05-23T05:31:16Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T05:31:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hxjxxn95/distilbert-base-uncased-finetuned-emotion
Hxjxxn95
2024-05-23T05:25:35Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T05:21:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9275 - name: F1 type: f1 value: 0.9274166289407277 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2074 - Accuracy: 0.9275 - F1: 0.9274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7869 | 1.0 | 250 | 0.2955 | 0.914 | 0.9134 | | 0.2338 | 2.0 | 500 | 0.2074 | 0.9275 | 0.9274 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
hgnoi/mSzMfGyRJQO8hBYC
hgnoi
2024-05-23T05:23:25Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T05:21:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DrNicefellow/microscopic-mamba-2.1B-hf-25.9ksteps
DrNicefellow
2024-05-23T05:15:46Z
5
0
transformers
[ "transformers", "pytorch", "mamba", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T03:27:44Z
--- license: apache-2.0 --- Self trained microscopic Mamba. Around 2.1G parameters. The tokenizer is the one from https://huggingface.co/state-spaces/mamba-2.8b-hf. It is being trained on around 400B tokens and this is step 25.9k. The evaluation is being conducted now. ## License This model is available under the Apache 2.0 License. ## Discord Server Join our Discord server [here](https://discord.gg/xhcBDEM3). ## Feeling Generous? ๐Ÿ˜Š Eager to buy me a cup of 2$ coffe or iced tea?๐Ÿตโ˜• Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
HariprasathSB/tamil-summarization-mt5-base
HariprasathSB
2024-05-23T04:57:10Z
120
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-base", "base_model:finetune:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-22T14:32:04Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: google/mt5-base model-index: - name: tamil-summarization-mt5-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tamil-summarization-mt5-base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-1b-pm-gen-ian-nd
AlignmentResearch
2024-05-23T04:52:36Z
144
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:50:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pritiOli/mistral-fine-tuned-student-assessment
pritiOli
2024-05-23T04:48:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T04:48:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** pritiOli - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Marupudi/dharaneesh-tiiuae-falcon-7b-v2
Marupudi
2024-05-23T04:45:52Z
0
0
transformers
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:44:35Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
AlignmentResearch/robust_llm_pythia-410m-pm-gen-ian-nd
AlignmentResearch
2024-05-23T04:44:22Z
144
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:43:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xchgeaxeax/WhiteRabbitNeo-7B-v1.5a-GGUF
xchgeaxeax
2024-05-23T04:40:13Z
0
0
null
[ "license:other", "region:us" ]
null
2024-05-23T04:40:13Z
--- license: other license_name: deepseek license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/raw/main/LICENSE ---
votepurchase/Starry-XL-v5.2
votepurchase
2024-05-23T04:39:40Z
313
4
diffusers
[ "diffusers", "safetensors", "anime", "stable-diffusion-xl", "text-to-image", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-23T04:39:40Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd language: - en library_name: diffusers pipeline_tag: text-to-image tags: - anime - stable-diffusion-xl - safetensors --- <style> .title-container { display: flex; justify-content: center; align-items: center; height: 80vh; /* Adjust this value to position the title vertically */ } .title { font-size: 1.5em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #FFBF00, #F28C28); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } </style> <h1 class="title"><span>Starry XL 5.2</span></h1> ## Model Information - Developed by: [kitarz](https://civitai.com/user/kitarz) - Funded by: kitarz - Model type: SDXL 1.0 - Finetuned from: [Kohaku-XL Epsilon](https://civitai.com/models/399873/kohaku-xl-epsilon) - License: Fair AI Public License 1.0-SD > [!WARNING] > This is a not the offical model page of this model's author ## Usages ๐Ÿช„ **Try Starry XL Demo here:** https://huggingface.co/spaces/eienmojiki/StarryXL-Demo > Starry is based on epsilon, and during training, the caption are overall close to Kohaku epsilon, so the overall usage is the same ### Artist wildcard **There is a wildcard for 600 artists here:** [starry_aritst_600_list](https://civitai.com/api/download/models/499498?type=Training%20Data) for other artists and characters, please use the existing list from Kohaku Epsilon. https://civitai.com/api/download/models/445973?type=Training%20Data > [!IMPORTANT] > **Note that Starry requires high accuracy in artist names, so ensure there are no spelling errors and use the correct artist/character tags.** ### Prompt format ``` <1girl/1boy/1other/...>, <character>, <series>, <artists>, <general tags>, <quality tags>, <year tags>, <meta tags>, <rating tags> ``` - Quality tags: masterpiece, best quality, great quality, good quality, normal quality, low quality, worst quality - Rating tags: safe, sensitive, nsfw, explicit - Date tags: newest, recent, mid, early, old ### Recommended Negative Prompt - **Long** ``` bad anatomy,blurry,(worst quality:1.8),low quality,hands bad,face bad,(normal quality:1.3),bad hands,mutated hands and fingers,extra legs,extra arms,duplicate,cropped,text,jpeg,artifacts,signature,watermark,username,blurry,artist name,trademark,title,multiple view,Reference sheet,long body,multiple breasts,mutated,bad anatomy,disfigured,bad proportions,duplicate,bad feet,artist name,ugly,text font ui,missing limb,monochrome, ``` - **Short** ``` nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, ``` ### Style Select You can directly use artist's prompt to generate image. ``` 1girl,momoi \(blue archive\), blue archive, ``` ``` {style}, ``` ``` solo, headphones, halo, pink halo, white jacket, short hair, bow, shirt, necktie, white background, white shirt, blue necktie, fake animal ears, animal ears, pink bow, collared shirt, simple background, pink eyes, blonde hair, animal ear headphones, looking at viewer, hair bow, jacket,newest, masterpiece, best quality, absurdres, highres, ``` ![img1](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9593866e-4471-4e8d-91a9-bdaead3d6f3b/width=525/9593866e-4471-4e8d-91a9-bdaead3d6f3b.jpeg) ![img2](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/def8091f-ab81-495e-92f1-640322d2e879/width=525/def8091f-ab81-495e-92f1-640322d2e879.jpeg) ![img3](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2150ba87-7a69-4968-961e-733ce807bcba/width=525/2150ba87-7a69-4968-961e-733ce807bcba.jpeg) ![img4](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/964ac69e-29db-4e03-adc3-beb3fa794a8c/width=525/964ac69e-29db-4e03-adc3-beb3fa794a8c.jpeg) ![img5](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/964ac69e-29db-4e03-adc3-beb3fa794a8c/width=525/964ac69e-29db-4e03-adc3-beb3fa794a8c.jpeg) ![img6](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/964ac69e-29db-4e03-adc3-beb3fa794a8c/width=525/964ac69e-29db-4e03-adc3-beb3fa794a8c.jpeg) ### Enhance your generation 1. You can use [DanTagGen](https://github.com/KohakuBlueleaf/z-a1111-sd-webui-dtg) to generate images with a strong style from an artist > Try DanTagGen on HuggingFace: https://huggingface.co/spaces/KBlueLeaf/DTG-demo ``` 1girl,{style}, {dtg expand} newest, masterpiece, best quality, absurdres, highres, ``` 2. Artists Combination Combining multiple artists is highly recommended, and you can use the artist list to try different orders and combinations. *In fact, you can use the famous nai3 artist prompts to combine styles directly. (This is not a simple nai3 distillation, it uses artist prompts for style combine)* ``` (ningen mame:0.9), ciloranko, sho \(sho lwlw\), (tianliang duohe fangdongye:0.8), ask \(askzy\), wlop, ``` ![img7](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7c6bbb3f-7b2e-4b33-934c-3670395130d2/width=525/7c6bbb3f-7b2e-4b33-934c-3670395130d2.jpeg) ![img8](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/92381267-f5c8-4acb-ab49-df1cd5e9b1d6/width=525/92381267-f5c8-4acb-ab49-df1cd5e9b1d6.jpeg) ## License This model is released under Fair-AI-Public-License-1.0-SD Please check this website for more information: Freedom of Development (freedevproject.org)
votepurchase/artiwaifu-diffusion-1.0
votepurchase
2024-05-23T04:39:28Z
294
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-23T04:39:28Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en tags: - text-to-image - stable-diffusion - safetensors - stable-diffusion-xl - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 pipeline_tag: text-to-image --- <h1 align="center"><strong style="font-size: 48px;">ArtiWaifu Diffusion 1.0</strong></h1> <p align="center"> <img src="https://i.postimg.cc/RFN05PW0/1.png" alt="alt text" title="Cover" width="450"/> </p> We have released the **A**rti**Wa**ifu Diffusion V1.0 model, designed to generate aesthetically pleasing and faithfully restored anime-style illustrations. The AWA Diffusion is an iteration of the Stable Diffusion XL model, mastering over 6000 artistic styles and more than 4000 anime characters, generating images through [trigger words](#trigger-words). As a specialized image generation model for anime, it excels in producing high-quality anime images, especially in generating images with highly recognizable styles and characters while maintaining a consistently high-quality aesthetic expression. ## Model Details The AWA Diffusion model is fine-tuned from Stable Diffusion XL, with a selected dataset of 1.5M high-quality anime images, covering a wide range of both popular and niche anime concepts up to April 15, 2024. AWA Diffusion employs our most advanced training strategies, enabling users to easily induce the model to generate images of specific characters or styles while maintaining high image quality and aesthetic expression. **Model Information** - Developed by: [Euge](https://civitai.com/user/Euge_) - Funded by: [Neta.art](https://nieta.art/) - Model type: Generative text-to-image model - Finetuned from model: [SDXL 1.0 Base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) - License: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) ## Usage Guide This guide will (i) introduce the model's recommended usage methods and prompt writing strategies, aiming to provide suggestions for generation, and (ii) serve as a reference document for model usage, detailing the writing patterns and strategies for trigger words, quality tags, rating tags, style tags, and character tags. ### Basic Usage - **CFG scale**: <span style="color:cyan">5-11</span> - **Resolution**: Area (= width x height) around 1024x1024. Not lower than 256x256, and resolutions where both length and width are multiples of 32. - **Sampling method**: Euler A (<span style="color:cyan">50+</span> steps) or DPM++ 2M Karras (<span style="color:cyan">~35</span> steps) Due to the special training method, AWA's optimal inference step count is higher than regular values. As the inference steps increase, the quality of the generated images can continue to improve... โ“ **Question:** Why not use the standard SDXL resolution? ๐Ÿ’ก **Answer:** Because the bucketing algorithm used in training does not adhere to a fixed set of buckets. Although this does not conform to positional encoding, we have not observed any adverse effects. ### Prompting Strategies All text-to-image diffusion models have a notoriously high sensitivity to prompt, and AWA Diffusion is no exception. Even a misspelling in the prompt, or even replacing spaces with underscores, can affect the generated results. AWA Diffusion encourages users to write prompt in **tags** separated by **comma + space (`, `)**. Although the model also supports natural language descriptions as prompt, or an intermix of both, the tag-by-tag format is more stable and user-friendly. When describing a specific ACG concept, such as a character, style, or scene, we recommend users choose tags from the [Danbooru tags](https://danbooru.donmai.us/tags) and replace underscores in the Danbooru tags with spaces to ensure the model accurately understands your needs. For example, `bishop_(chess)` should be written as `bishop (chess)`, and in inference tools like AUTOMATIC1111 WebUI that use parentheses to weight prompt, all parentheses within the tags should be escaped, i.e., `bishop \(chess\)`. #### Tag Ordering Including AWA Diffusion, most diffusion models better understand logically ordered tags. While tag ordering is not mandatory, it can help the model better understand your needs. Generally, the earlier the tag in the order, the greater its impact on generation. Here's an example of tag ordering. The example organizes the order of tags, prepends [art style tags](#style-tags) and [character tags](#character-tags) because style and subject are the most important to the image. Subsequently, other tags are added in order of importance. Lastly, [aesthetic tags](#aesthetic-tags) and [quality tags](#quality-tags) are positioned at the end to further emphasize the aesthetics of the image. art style (<span style="color:red">_by xxx_</span>) -> character (<span style="color:orange">_1 frieren (sousou no frieren)_</span>) -> race (elf) -> composition (cowboy shot) -> painting style (<span style="color:green">_impasto_</span>) -> theme (fantasy theme) -> main environment (in the forest, at day) -> background (gradient background) -> action (sitting on ground) -> expression (expressionless) -> main characteristics (white hair) -> other characteristics (twintails, green eyes, parted lip) -> clothing (wearing a white dress) -> clothing accessories (frills) -> other items (holding a magic wand) -> secondary environment (grass, sunshine) -> aesthetics (<span style="color:blue">_beautiful color_</span>, <span style="color:cyan">_detailed_</span>) -> quality (<span style="color:purple">_best_</span> quality) -> secondary description (birds, cloud, butterfly) Tag order is not set in stone. Flexibility in writing prompt can yield better results. For example, if the effect of a concept (such as style) is too strong and detracts from the aesthetic appeal of the image, you can move it to a later position to reduce its impact. #### Negative Prompt Negative prompt are not necessary for AWA Diffusion. If you use negative prompt, it is not the case that the more negative prompt, the better. They should be **as concise as possible and easily recognizable by the model**. Too many negative words may lead to poorer generation results. Here are some recommended scenarios for using negative prompt: 1. Watermark: `signature`, `logo`, `artist name`; 2. Quality: `worst quality`, `lowres`, `ugly`, `abstract`; 3. Style: `real life`, `3d`, `celluloid`, `sketch`, `draft`; 4. Human anatomy: `deformed hand`, `fused fingers`, `extra limbs`, `extra arms`, `missing arm`, `extra legs`, `missing leg`, `extra digits`, `fewer digits`. ### Trigger Words Add trigger words to your prompts to inform the model about the concept you want to generate. Trigger words can include character names, artistic styles, scenes, actions, quality, etc. **Tips for Trigger Word** 1. **Typos**: The model is very sensitive to the spelling of trigger words. Even a single letter difference can cause a trigger to fail or lead to unexpected results. 2. **Bracket Escaping**: Pay attention when using inference tools that rely on parentheses for weighting prompt, such as AUTOMATIC1111 WebUI, to escape parentheses in trigger words, e.g., `1 lucy (cyberpunk)` -> `1 lucy \(cyberpunk\)`. 3. **Triggering Effect Preview**๏ผšThrough searching tags on [Danbooru](https://danbooru.donmai.us/tags) to preview the tag and better understand the tag's meaning and usage. #### Style Tags Style tags are divided into two types: <span style="color:red">Painting Style Tags</span> and <span style="color:blue">Artistic Style Tags</span>. <span style="color:red">Painting Style Tags</span> describe the painting techniques or media used in the image, such as oil painting, watercolor, flat color, and impasto. <span style="color:blue">Artistic Style Tags</span> represent the artistic style of the artist behind the image. AWA Diffusion supports the following <span style="color:red">Painting Style Tags</span>: - Painting style tags available in the Danbooru tags, such as `oil painting`, `watercolor`, `flat color`, etc.; - All painting style tags supported by [AID XL 0.8](https://civitai.com/models/124189/anime-illust-diffusion-xl), such as `flat-pasto`, etc.; - All style tags supported by [Neta Art XL 1.0](https://civitai.com/models/410737/neta-art-xl), such as `gufeng`, etc.; See the [Painting Style Tags List](https://huggingface.co/Eugeoter/artiwaifu-diffusion-1.0/blob/main/references/style.csv) for full lists of painting style tags. AWA Diffusion supports the following <span style="color:blue">Artistic Style Tags</span>: - Artistic style tags available in the Danbooru tags, such as `by yoneyama mai`, `by wlop`, etc.; - All artistic style tags supported by [AID XL 0.8](https://civitai.com/models/124189/anime-illust-diffusion-xl), such as `by antifreeze3`, `by 7thknights`, etc.; See the [Artistic Style Tags List](https://huggingface.co/Eugeoter/artiwaifu-diffusion-1.0/blob/main/references/artist.csv) for full lists of artistic style tags. The higher the tag count in the tag repository, the more thoroughly the artistic style has been trained, and the higher the fidelity in generation. Typically, artistic style tags with a count higher than **50** yield better generation results. **Tips for Style Tag** 1. **Intensity Adjustment**: You can adjust the intensity of a style by altering the order or weighting of style tags in your prompt. Frontloading a style tag enhances its effect, while placing it later reduces its effect. โ“ **Question:** Why include the prefix `by` in artistic style tags? ๐Ÿ’ก **Answer:** To clearly inform the model that you want to generate a specific artistic style rather than something else, we recommend including the prefix `by` in artistic style tags. This differentiates `by xxx` from `xxx`, especially when `xxx` itself carries other meanings, such as `dino` which could represent either a dinosaur or an artist's identifier. Similarly, when triggering characters, add a `1` as a prefix to the character trigger word. #### Character Tags Character tags describe the character IP in the generated image. Using character tags will guide the model to generate the **appearance features** of the character. Character tags also need to be sourced from the [Character Tag List](https://huggingface.co/Eugeoter/artiwaifu-diffusion-1.0/blob/main/references/character.csv). To generate a specific character, first find the corresponding trigger word in the tag repository, replace all underscores `_` in the trigger word with spaces ` `, and prepend `1 ` to the character name. For example, `1 ayanami rei` triggers the model to generate the character Rei Ayanami from the anime "EVA," corresponding to the Danbooru tag `ayanami_rei`; `1 asuna (sao)` triggers the model to generate the character Asuna from "Sword Art Online," corresponding to the Danbooru tag `asuna_(sao)`. [More examples](#examples) The higher the tag count in the tag repository, the more thoroughly the character has been trained, and the higher the fidelity in generation. Typically, character tags with a count higher than **100** yield better generation results. **Tips for Character Tag** 1. **Character Costuming**: To achieve more flexible character costuming, character tags do not deliberately guide the model to draw the official attire of the character. To generate a character in a specific official outfit, besides the trigger word, you should also include a description of the attire in the prompt, e.g., "1 lucy (cyberpunk), <span style="color:cyan">wearing a white cropped jacket, underneath bodysuit, shorts, thighhighs, hip vent</span>". 2. **Series Annotations**: Some character tags include additional parentheses annotations after the character name. The parentheses and the annotations within cannot be omitted, e.g., `1 lucy (cyberpunk)` cannot be written as `1 lucy`. Other than that, you don't need to add any additional annotations, for example, you DON'T need to add the series tag to which the character belongs after the character tag. 3. **Known Issue 1**: When generating certain characters, mysterious feature deformations may occur, e.g., `1 asui tsuyu` triggering the character Tsuyu Asui from "My Hero Academia" may result in an extra black line between the eyes. This is because the model incorrectly interprets the large round eyes as glasses, thus `glasses` should be included in the negative prompt to avoid this issue. 4. **Known Issue 2**: When generating less popular characters, AWA Diffusion might produce images with incomplete feature restoration due to insufficient data/training. In such cases, we recommend that you extend the character description in your prompt beyond just the character name, detailing the character's origin, race, hair color, attire, etc. **Character Tag Trigger Examples** | Trigger Word | Note | | ------------------------------- | -------------------------------------------------------------- | | 1 lucy (cyberpunk) | โœ… Correct character tag | | 1 lucy | โŒ Missing bracket annotation | | 1 lucy (cyber) | โŒ Incorrect bracket annotation | | lucy (cyberpunk) | โŒ Missing prefix `1 ` | | 1 lucy cyberpunk | โŒ Missing brackets | | 1 lucy (cyberpunk | โŒ Bracket not closed | | 1 lucky (cyberpunk) | โŒ Spelling error | | 1 lucy (cyberpunk: edgerunners) | โŒ Bracket annotation not following the required character tag | โ“ **Question:** Why do some character tags contain bracket annotations, e.g., `lucy (cyberpunk)`, while others do not, e.g., `frieren`? ๐Ÿ’ก **Answer:** In different works, there may be characters with the same name, such as Asuna from "Sword Art Online" and "Blue Archive". To distinguish these characters with the same name, it is necessary to annotate the character's name with the work's name, abbreviated if the name is too long. For characters with unique names that currently have no duplicates, like `frieren`, no special annotations are required. Here is an example: #### Quality Tags and Aesthetic Tags For AWA Diffusion, including quality descriptors in your positive prompt is **very important**. Quality descriptions relate to quality tags and aesthetic tags. Quality tags directly describe the aesthetic quality of the generated image, impacting the detail, texture, human anatomy, lighting, color, etc. Adding quality tags helps the model generate higher quality images. Quality tags are ranked from highest to lowest as follows: <span style="color:orange">amazing quality</span> -> <span style="color:purple">best quality</span> -> <span style="color:blue">high quality</span> -> <span style="color:green">normal quality</span> -> low quality -> <span style="color:grey">worst quality</span> Aesthetic tags describe the aesthetic features of the generated image, aiding the model in producing artistically appealing images. In addition to typical aesthetic words like `perspective`, `lighting and shadow`, AWA Diffusion has been specially trained to respond effectively to aesthetic trigger words such as `beautiful color`, `detailed`, and `aesthetic`, which respectively express appealing colors, details, and overall beauty. The recommended generic way to describe quality is: _(Your Prompt), <span style="color:orange">beautiful color, detailed, amazing quality</span>_ **Tips for Quality and Aesthetic Tags** 1. **Tag Quantity**: Only one quality tag is needed; multiple aesthetic tags can be added. 2. **Tag Position**: The position of quality and aesthetic tags is not fixed, but they are typically placed at the end of the prompt. 3. **Relative Quality**: There is no absolute hierarchy of quality; the implied quality aligns with general aesthetic standards, and different users may have different perceptions of quality. #### Rating Tags Rating tags describe the level of exposure in the content of the generated image. Rating tags are ranked from highest to lowest as follows: <span style="color:green">rating: general</span> (or <span style="color:green">safe</span>) -> <span style="color:yellow">rating: suggestive</span> -> <span style="color:orange">rating: questionable</span> -> <span style="color:red">rating: explicit</span> (or <span style="color:red">nsfw</span>) ### Prompt Word Examples #### Example 1 **A** _<span style="color:green">by yoneyama mai</span>, <span style="color:blue">1 frieren</span>, 1girl, solo, fantasy theme, smile, holding a magic wand, <span style="color:yellow">beautiful color</span>, <span style="color:red">amazing quality</span>_ 1. <span style="color:green">by yoneyama mai</span> triggers the artistic style of Yoneyama Mai, placed at the front to enhance the effect. 2. <span style="color:blue">1 frieren</span> triggers the character Frieren from the series "Frieren at the Funeral." 3. <span style="color:yellow">beautiful color</span> describes the beautiful colors in the generated image. 4. <span style="color:red">amazing quality</span> describes the stunning quality of the generated image. **B** _<span style="color:green">by nixeu</span>, <span style="color:blue">1 lucy (cyberpunk)</span>, 1girl, solo, cowboy shot, gradient background, white cropped jacket, underneath bodysuit, shorts, thighhighs, hip vent, <span style="color:yellow">detailed</span>, <span style="color:red">best quality</span>_ #### Example 2: Style Mixing By layering multiple different style tags, you can generate images with features of multiple styles. **A** Simple Mixing _**<span style="color:green">by ningen mame</span>, <span style="color:cyan">by ciloranko</span>, <span style="color:blue">by sho (sho lwlw)</span>**, 1girl, 1 hatsune miku, sitting, arm support, smile, detailed, amazing quality_ **B** Weighted Mixing Using AUTOMATIC1111 WebUI prompt weighting syntax (parentheses weighting), weight different style tags to better control the generated image's style. _**<span style="color:green">(by ningen mame:0.8)</span>, <span style="color:cyan">(by ciloranko:1.1)</span>, <span style="color:blue">(by sho \(sho lwlw\):1.2)</span>**, 1girl, 1 hatsune miku, sitting, arm support, smile, detailed, amazing quality_ #### Example 3: Multi-Character Scenes By adding multiple character tags to your prompts, you can generate images with multiple characters in the same frame. Compared to other similar models, AWA performs better in multi-character scenes but remains unstable. **A** Mixed Gender Scene _**1girl and 1boy, <span style="color:blue">1 ganyu</span> girl, <span style="color:cyan">1 gojou satoru</span> boy**, beautiful color, amazing quality_ **B** Same Gender Scene _**2girls, <span style="color:blue">1 ganyu</span> girl, <span style="color:orange">1 yoimiya</span> girl**, beautiful color, amazing quality_ ## Future Work AWA Diffusion is expected to combine high-level <span style="color:purple">aesthetics</span> with comprehensive <span style="color:cyan">knowledge</span>. It should neither have the traditional AI's greasy feel nor become a knowledge-deficient vase. We will continue to explore more advanced training techniques and strategies, consistently improving the model's quality. ## Support Us Training AWA Diffusion incurs substantial costs. If you appreciate our work, please consider supporting us through [Ko-fi](https://ko-fi.com/eugeai), to aid our research and development efforts. Thank you for your like and support!
AlignmentResearch/robust_llm_pythia-160m-pm-gen-ian-nd
AlignmentResearch
2024-05-23T04:39:20Z
144
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:38:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
votepurchase/PerfectDeliberate-Anime_v2
votepurchase
2024-05-23T04:38:57Z
310
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-23T04:38:57Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/111274?modelVersionId=307086
Vk357/distilbert-base-uncased-finetuned-cola
Vk357
2024-05-23T04:38:10Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T04:58:52Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: Vk357/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Vk357/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5146 - Validation Loss: 0.4586 - Train Matthews Correlation: 0.4722 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5146 | 0.4586 | 0.4722 | 0 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
AlignmentResearch/robust_llm_pythia-70m-pm-gen-ian-nd
AlignmentResearch
2024-05-23T04:37:58Z
142
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:37:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-31m-pm-gen-ian-nd
AlignmentResearch
2024-05-23T04:37:57Z
142
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:37:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/Ve07K2MWletV660U
hgnoi
2024-05-23T04:37:56Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:36:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
votepurchase/pony
votepurchase
2024-05-23T04:37:26Z
443
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-23T04:37:26Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # Pony API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/4539544541710266402.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "pony" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/pony) Model link: [View model](https://modelslab.com/models/pony) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "pony", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
dewifaj/alzheimer_classification
dewifaj
2024-05-23T04:34:27Z
216
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-23T04:34:09Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - f1 model-index: - name: alzheimer_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # alzheimer_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3183 - F1: 0.8946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 128 | 0.8686 | 0.5548 | | No log | 2.0 | 256 | 0.8457 | 0.6087 | | No log | 3.0 | 384 | 0.7396 | 0.6478 | | 0.8172 | 4.0 | 512 | 0.6833 | 0.6826 | | 0.8172 | 5.0 | 640 | 0.6280 | 0.7205 | | 0.8172 | 6.0 | 768 | 0.5347 | 0.7727 | | 0.8172 | 7.0 | 896 | 0.5108 | 0.7909 | | 0.5292 | 8.0 | 1024 | 0.4707 | 0.8078 | | 0.5292 | 9.0 | 1152 | 0.4477 | 0.8302 | | 0.5292 | 10.0 | 1280 | 0.4075 | 0.8511 | | 0.5292 | 11.0 | 1408 | 0.4263 | 0.8380 | | 0.3498 | 12.0 | 1536 | 0.3558 | 0.8756 | | 0.3498 | 13.0 | 1664 | 0.3768 | 0.8558 | | 0.3498 | 14.0 | 1792 | 0.3412 | 0.8701 | | 0.3498 | 15.0 | 1920 | 0.3028 | 0.8952 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
morgpaul/MidJ
morgpaul
2024-05-23T04:33:08Z
0
0
null
[ "region:us" ]
null
2024-05-23T04:32:48Z
--- license: other license_name: midjourney-prompts license_link: LICENSE ---from datasets import load_dataset dataset = load_dataset("vivym/midjourney-prompts")
khaingsmon/xho-2
khaingsmon
2024-05-23T04:31:08Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:ml-superb-subset", "base_model:Akashpb13/Swahili_xlsr", "base_model:finetune:Akashpb13/Swahili_xlsr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T03:14:20Z
--- license: apache-2.0 base_model: Akashpb13/Swahili_xlsr tags: - generated_from_trainer datasets: - ml-superb-subset metrics: - wer model-index: - name: xho-2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: ml-superb-subset type: ml-superb-subset config: xho split: test args: xho metrics: - name: Wer type: wer value: 0.6780487804878049 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xho-2 This model is a fine-tuned version of [Akashpb13/Swahili_xlsr](https://huggingface.co/Akashpb13/Swahili_xlsr) on the ml-superb-subset dataset. It achieves the following results on the evaluation set: - Loss: 0.7671 - Wer: 0.6780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 0.033 | 15.3846 | 400 | 0.7671 | 0.6780 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
leo009/mistral-7b-v3
leo009
2024-05-23T04:29:22Z
10
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T04:22:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/mistral-7b-v0.3-bnb-4bit --- # Uploaded model - **Developed by:** leo009 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
NakanoMiku0-0/llama3-patent-finetune-text
NakanoMiku0-0
2024-05-23T04:20:49Z
10
0
transformers
[ "transformers", "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T00:25:38Z
--- license: apache-2.0 ---
johaanm/lora_model
johaanm
2024-05-23T04:17:52Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T04:17:22Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** johaanm - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
hgnoi/8JE4ridHS2ENwY0l
hgnoi
2024-05-23T04:15:08Z
130
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T04:13:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
quangtqv/bi_encoder_tool_learning_best_model_23_5_2024
quangtqv
2024-05-23T04:15:00Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-23T04:14:46Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # quangtqv/bi_encoder_tool_learning_best_model_23_5_2024 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_best_model_23_5_2024') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_best_model_23_5_2024) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
nitus-ac/nMer3a
nitus-ac
2024-05-23T04:12:31Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:Undi95/Llama-3-LewdPlay-8B", "base_model:merge:Undi95/Llama-3-LewdPlay-8B", "base_model:ajibawa-2023/General-Stories-Mistral-7B", "base_model:merge:ajibawa-2023/General-Stories-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T03:54:25Z
--- base_model: - ajibawa-2023/General-Stories-Mistral-7B - Undi95/Llama-3-LewdPlay-8B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [ajibawa-2023/General-Stories-Mistral-7B](https://huggingface.co/ajibawa-2023/General-Stories-Mistral-7B) * [Undi95/Llama-3-LewdPlay-8B](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ajibawa-2023/General-Stories-Mistral-7B - model: Undi95/Llama-3-LewdPlay-8B merge_method: slerp base_model: ajibawa-2023/General-Stories-Mistral-7B dtype: bfloat16 parameters: t: [0, 0.5, 0.3, 0.7, 1] ```
pritiOli/llama3-fine-tuned-student-assessment
pritiOli
2024-05-23T04:06:25Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-22T19:42:31Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** pritiOli - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xkiwilabs/lora_opComms_LLama3_v5
xkiwilabs
2024-05-23T04:02:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T04:02:37Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** xkiwilabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
emily49/frozen-stable-diffusion-non-inpaint
emily49
2024-05-23T04:00:42Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-22T22:42:13Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers base_model: stabilityai/stable-diffusion-2-1-base inference: true instance_prompt: a photo of sks frozen --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - emily49/frozen-stable-diffusion-non-inpaint This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks frozen using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
Warriorman3d5/H
Warriorman3d5
2024-05-23T04:00:24Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2024-05-23T04:00:24Z
--- license: artistic-2.0 ---
GENIAC-Team-Ozaki/lora-dpo-finetuned-stage4-sft-multitest_0.1_5e-7_ep-15
GENIAC-Team-Ozaki
2024-05-23T03:58:19Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T03:48:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BotCuddles/llama3_try2
BotCuddles
2024-05-23T03:55:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T03:54:46Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** BotCuddles - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
reinel215/whisper-small-panita-v1
reinel215
2024-05-23T03:52:41Z
79
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-22T17:30:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/kYf3FHtCczd5iRD4
hgnoi
2024-05-23T03:52:22Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T03:50:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JayhC/L3_SnowStorm_4x8B-8bpw-h8-exl2
JayhC
2024-05-23T03:51:33Z
7
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-23T01:47:42Z
--- license: llama3 tags: - moe language: - en --- <br/><br/> 8bpw/h8 exl2 quantization of [xxx777xxxASD/L3_SnowStorm_4x8B](https://huggingface.co/xxx777xxxASD/L3_SnowStorm_4x8B) using exllamav2 0.0.21 and default calibration dataset. --- **ORIGINAL CARD:** <style> .image-container { position: relative; display: inline-block; } .image-container img { display: block; border-radius: 10px; box-shadow: 0 0 1px rgba(0, 0, 0, 0.3); } .image-container::before { content: ""; position: absolute; top: 0px; left: 20px; width: calc(100% - 40px); height: calc(100%); background-image: url("https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/OuMe79ZQPdCX01rTdfgXn.png"); background-size: cover; filter: blur(10px); z-index: -1; } </style> <br> <div class="image-container"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/OuMe79ZQPdCX01rTdfgXn.png" style="width: 96%; margin: auto;" > </div> (Maybe i'll change the waifu picture later) > [!NOTE] > [GGUF quants](https://huggingface.co/collections/xxx777xxxASD/snowstorm-4x8b-664b52a1d2a12e515efb5680) Experimental RP-oriented MoE, the idea was to get a model that would be equal to or better than Mixtral 8x7B and it's finetunes in RP/ERP tasks. ### Llama 3 SnowStorm 4x8B ``` base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS gate_mode: random dtype: bfloat16 experts_per_token: 2 experts: - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS - source_model: openlynn_Llama-3-Soliloquy-8B-v2 - source_model: Sao10K_L3-8B-Stheno-v3.1 ``` ## Models used - [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) - [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) - [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) ## Difference(from ChaoticSoliloquy v1.5) - Update from [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) to [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - Update from [openlynn/Llama-3-Soliloquy-8B-v1](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v1) to [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) - Update from [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) to [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) ## Vision [llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png) ## Prompt format: Llama 3
votepurchase/animagine-xl-3.1
votepurchase
2024-05-23T03:44:17Z
10,813
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "en", "base_model:cagliostrolab/animagine-xl-3.0", "base_model:finetune:cagliostrolab/animagine-xl-3.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-23T03:44:16Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en tags: - text-to-image - stable-diffusion - safetensors - stable-diffusion-xl base_model: cagliostrolab/animagine-xl-3.0 widget: - text: 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck, masterpiece, best quality, very aesthetic, absurdes parameter: negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract] example_title: 1girl - text: 1boy, male focus, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck, masterpiece, best quality, very aesthetic, absurdes parameter: negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract] example_title: 1boy --- <style> .title-container { display: flex; justify-content: center; align-items: center; height: 100vh; /* Adjust this value to position the title vertically */ } .title { font-size: 2.5em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #7ed56f, #28b485); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } .custom-table { table-layout: fixed; width: 100%; border-collapse: collapse; margin-top: 2em; } .custom-table td { width: 50%; vertical-align: top; padding: 10px; box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15); } .custom-image-container { position: relative; width: 100%; margin-bottom: 0em; overflow: hidden; border-radius: 10px; transition: transform .7s; /* Smooth transition for the container */ } .custom-image-container:hover { transform: scale(1.05); /* Scale the container on hover */ } .custom-image { width: 100%; height: auto; object-fit: cover; border-radius: 10px; transition: transform .7s; margin-bottom: 0em; } .nsfw-filter { filter: blur(8px); /* Apply a blur effect */ transition: filter 0.3s ease; /* Smooth transition for the blur effect */ } .custom-image-container:hover .nsfw-filter { filter: none; /* Remove the blur effect on hover */ } .overlay { position: absolute; bottom: 0; left: 0; right: 0; color: white; width: 100%; height: 40%; display: flex; flex-direction: column; justify-content: center; align-items: center; font-size: 1vw; font-style: bold; text-align: center; opacity: 0; /* Keep the text fully opaque */ background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%); transition: opacity .5s; } .custom-image-container:hover .overlay { opacity: 1; } .overlay-text { background: linear-gradient(45deg, #7ed56f, #28b485); -webkit-background-clip: text; color: transparent; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7); .overlay-subtext { font-size: 0.75em; margin-top: 0.5em; font-style: italic; } .overlay, .overlay-subtext { text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); } </style> <h1 class="title"> <span>Animagine XL 3.1</span> </h1> <table class="custom-table"> <tr> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/yq_5AWegnLsGyCYyqJ-1G.png" alt="sample1"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/sp6w1elvXVTbckkU74v3o.png" alt="sample4"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/OYBuX1XzffN7Pxi4c75JV.png" alt="sample2"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/ytT3Oaf-atbqrnPIqz_dq.png" alt="sample3"> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/0oRq204okFxRGECmrIK6d.png" alt="sample1"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/DW51m0HlDuAlXwu8H8bIS.png" alt="sample4"> </div> </td> </tr> </table> **Animagine XL 3.1** is an update in the Animagine XL V3 series, enhancing the previous version, Animagine XL 3.0. This open-source, anime-themed text-to-image model has been improved for generating anime-style images with higher quality. It includes a broader range of characters from well-known anime series, an optimized dataset, and new aesthetic tags for better image creation. Built on Stable Diffusion XL, Animagine XL 3.1 aims to be a valuable resource for anime fans, artists, and content creators by producing accurate and detailed representations of anime characters. ## Model Details - **Developed by**: [Cagliostro Research Lab](https://huggingface.co/cagliostrolab) - **In collaboration with**: [SeaArt.ai](https://www.seaart.ai/) - **Model type**: Diffusion-based text-to-image generative model - **Model Description**: Animagine XL 3.1 generates high-quality anime images from textual prompts. It boasts enhanced hand anatomy, improved concept understanding, and advanced prompt interpretation. - **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) - **Fine-tuned from**: [Animagine XL 3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0) ## Gradio & Colab Integration Try the demo powered by Gradio in Huggingface Spaces: [![Open In Spaces](https://img.shields.io/badge/๐Ÿค—-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/cagliostrolab/animagine-xl-3.1) Or open the demo in Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/spaces/cagliostrolab/animagine-xl-3.1/blob/main/demo.ipynb) ## ๐Ÿงจ Diffusers Installation First install the required libraries: ```bash pip install diffusers transformers accelerate safetensors --upgrade ``` Then run image generation with the following example code: ```python import torch from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "cagliostrolab/animagine-xl-3.1", torch_dtype=torch.float16, use_safetensors=True, ) pipe.to('cuda') prompt = "1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, v, smile, looking at viewer, outdoors, night" negative_prompt = "nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]" image = pipe( prompt, negative_prompt=negative_prompt, width=832, height=1216, guidance_scale=7, num_inference_steps=28 ).images[0] image.save("./output/asuka_test.png") ``` ## Usage Guidelines ### Tag Ordering For optimal results, it's recommended to follow the structured prompt template because we train the model like this: ``` 1girl/1boy, character name, from what series, everything else in any order. ``` ## Special Tags Animagine XL 3.1 utilizes special tags to steer the result toward quality, rating, creation date and aesthetic. While the model can generate images without these tags, using them can help achieve better results. ### Quality Modifiers Quality tags now consider both scores and post ratings to ensure a balanced quality distribution. We've refined labels for greater clarity, such as changing 'high quality' to 'great quality'. | Quality Modifier | Score Criterion | |------------------|-------------------| | `masterpiece` | > 95% | | `best quality` | > 85% & โ‰ค 95% | | `great quality` | > 75% & โ‰ค 85% | | `good quality` | > 50% & โ‰ค 75% | | `normal quality` | > 25% & โ‰ค 50% | | `low quality` | > 10% & โ‰ค 25% | | `worst quality` | โ‰ค 10% | ### Rating Modifiers We've also streamlined our rating tags for simplicity and clarity, aiming to establish global rules that can be applied across different models. For example, the tag 'rating: general' is now simply 'general', and 'rating: sensitive' has been condensed to 'sensitive'. | Rating Modifier | Rating Criterion | |-------------------|------------------| | `safe` | General | | `sensitive` | Sensitive | | `nsfw` | Questionable | | `explicit, nsfw` | Explicit | ### Year Modifier We've also redefined the year range to steer results towards specific modern or vintage anime art styles more accurately. This update simplifies the range, focusing on relevance to current and past eras. | Year Tag | Year Range | |----------|------------------| | `newest` | 2021 to 2024 | | `recent` | 2018 to 2020 | | `mid` | 2015 to 2017 | | `early` | 2011 to 2014 | | `oldest` | 2005 to 2010 | ### Aesthetic Tags We've enhanced our tagging system with aesthetic tags to refine content categorization based on visual appeal. These tags are derived from evaluations made by a specialized ViT (Vision Transformer) image classification model, specifically trained on anime data. For this purpose, we utilized the model [shadowlilac/aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2), which assesses the aesthetic value of content before it undergoes training. This ensures that each piece of content is not only relevant and accurate but also visually appealing. | Aesthetic Tag | Score Range | |-------------------|-------------------| | `very aesthetic` | > 0.71 | | `aesthetic` | > 0.45 & < 0.71 | | `displeasing` | > 0.27 & < 0.45 | | `very displeasing`| โ‰ค 0.27 | ## Recommended settings To guide the model towards generating high-aesthetic images, use negative prompts like: ``` nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract] ``` For higher quality outcomes, prepend prompts with: ``` masterpiece, best quality, very aesthetic, absurdres ``` itโ€™s recommended to use a lower classifier-free guidance (CFG Scale) of around 5-7, sampling steps below 30, and to use Euler Ancestral (Euler a) as a sampler. ### Multi Aspect Resolution This model supports generating images at the following dimensions: | Dimensions | Aspect Ratio | |-------------------|-----------------| | `1024 x 1024` | 1:1 Square | | `1152 x 896` | 9:7 | | `896 x 1152` | 7:9 | | `1216 x 832` | 19:13 | | `832 x 1216` | 13:19 | | `1344 x 768` | 7:4 Horizontal | | `768 x 1344` | 4:7 Vertical | | `1536 x 640` | 12:5 Horizontal | | `640 x 1536` | 5:12 Vertical | ## Training and Hyperparameters **Animagine XL 3.1** was trained on 2x A100 80GB GPUs for approximately 15 days, totaling over 350 GPU hours. The training process consisted of three stages: - **Pretraining**: Utilized a data-rich collection of 870k ordered and tagged images to increase Animagine XL 3.0's model knowledge. - **Finetuning - First Stage**: Employed labeled and curated aesthetic datasets to refine the broken U-Net after pretraining. - **Finetuning - Second Stage**: Utilized labeled and curated aesthetic datasets to refine the model's art style and improve hand and anatomy rendering. ### Hyperparameters | Stage | Epochs | UNet lr | Train Text Encoder | Batch Size | Noise Offset | Optimizer | LR Scheduler | Grad Acc Steps | GPUs | |--------------------------|--------|---------|--------------------|------------|--------------|------------|-------------------------------|----------------|------| | **Pretraining** | 10 | 1e-5 | True | 16 | N/A | AdamW | Cosine Annealing Warm Restart | 3 | 2 | | **Finetuning 1st Stage** | 10 | 2e-6 | False | 48 | 0.0357 | Adafactor | Constant with Warmup | 1 | 1 | | **Finetuning 2nd Stage** | 15 | 1e-6 | False | 48 | 0.0357 | Adafactor | Constant with Warmup | 1 | 1 | ## Model Comparison (Pretraining only) ### Training Config | Configuration Item | Animagine XL 3.0 | Animagine XL 3.1 | |---------------------------------|------------------------------------------|------------------------------------------------| | **GPU** | 2 x A100 80G | 2 x A100 80G | | **Dataset** | 1,271,990 | 873,504 | | **Shuffle Separator** | True | True | | **Num Epochs** | 10 | 10 | | **Learning Rate** | 7.5e-6 | 1e-5 | | **Text Encoder Learning Rate** | 3.75e-6 | 1e-5 | | **Effective Batch Size** | 48 x 1 x 2 | 16 x 3 x 2 | | **Optimizer** | Adafactor | AdamW | | **Optimizer Args** | Scale Parameter: False, Relative Step: False, Warmup Init: False | Weight Decay: 0.1, Betas: (0.9, 0.99) | | **LR Scheduler** | Constant with Warmup | Cosine Annealing Warm Restart | | **LR Scheduler Args** | Warmup Steps: 100 | Num Cycles: 10, Min LR: 1e-6, LR Decay: 0.9, First Cycle Steps: 9,099 | Source code and training config are available here: https://github.com/cagliostrolab/sd-scripts/tree/main/notebook ### Acknowledgements The development and release of Animagine XL 3.1 would not have been possible without the invaluable contributions and support from the following individuals and organizations: - **[SeaArt.ai](https://www.seaart.ai/)**: Our collaboration partner and sponsor. - **[Shadow Lilac](https://huggingface.co/shadowlilac)**: For providing the aesthetic classification model, [aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2). - **[Derrian Distro](https://github.com/derrian-distro)**: For their custom learning rate scheduler, adapted from [LoRA Easy Training Scripts](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts/blob/main/custom_scheduler/LoraEasyCustomOptimizer/CustomOptimizers.py). - **[Kohya SS](https://github.com/kohya-ss)**: For their comprehensive training scripts. - **Cagliostrolab Collaborators**: For their dedication to model training, project management, and data curation. - **Early Testers**: For their valuable feedback and quality assurance efforts. - **NovelAI**: For their innovative approach to aesthetic tagging, which served as an inspiration for our implementation. - **KBlueLeaf**: For providing inspiration in balancing quality tags distribution and managing tags based on [Hakubooru Metainfo](https://github.com/KohakuBlueleaf/HakuBooru/blob/main/hakubooru/metainfo.py) Thank you all for your support and expertise in pushing the boundaries of anime-style image generation. ## Collaborators - [Linaqruf](https://huggingface.co/Linaqruf) - [ItsMeBell](https://huggingface.co/ItsMeBell) - [Asahina2K](https://huggingface.co/Asahina2K) - [DamarJati](https://huggingface.co/DamarJati) - [Zwicky18](https://huggingface.co/Zwicky18) - [Scipius2121](https://huggingface.co/Scipius2121) - [Raelina](https://huggingface.co/Raelina) - [Kayfahaarukku](https://huggingface.co/kayfahaarukku) - [Kriz](https://huggingface.co/Kr1SsSzz) ## Limitations While Animagine XL 3.1 represents a significant advancement in anime-style image generation, it is important to acknowledge its limitations: 1. **Anime-Focused**: This model is specifically designed for generating anime-style images and is not suitable for creating realistic photos. 2. **Prompt Complexity**: This model may not be suitable for users who expect high-quality results from short or simple prompts. The training focus was on concept understanding rather than aesthetic refinement, which may require more detailed and specific prompts to achieve the desired output. 3. **Prompt Format**: Animagine XL 3.1 is optimized for Danbooru-style tags rather than natural language prompts. For best results, users are encouraged to format their prompts using the appropriate tags and syntax. 4. **Anatomy and Hand Rendering**: Despite the improvements made in anatomy and hand rendering, there may still be instances where the model produces suboptimal results in these areas. 5. **Dataset Size**: The dataset used for training Animagine XL 3.1 consists of approximately 870,000 images. When combined with the previous iteration's dataset (1.2 million), the total training data amounts to around 2.1 million images. While substantial, this dataset size may still be considered limited in scope for an "ultimate" anime model. 6. **NSFW Content**: Animagine XL 3.1 has been designed to generate more balanced NSFW content. However, it is important to note that the model may still produce NSFW results, even if not explicitly prompted. By acknowledging these limitations, we aim to provide transparency and set realistic expectations for users of Animagine XL 3.1. Despite these constraints, we believe that the model represents a significant step forward in anime-style image generation and offers a powerful tool for artists, designers, and enthusiasts alike. ## License Based on Animagine XL 3.0, Animagine XL 3.1 falls under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with Stable Diffusion modelsโ€™ license. Key points: 1. **Modification Sharing:** If you modify Animagine XL 3.1, you must share both your changes and the original license. 2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too. 3. **Distribution Terms:** Any distribution must be under this license or another with similar rules. 4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values. The choice of this license aims to keep Animagine XL 3.1 open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms. ## Cagliostro Lab Discord Server Finally Cagliostro Lab Server open to public https://discord.gg/cqh9tZgbGc Feel free to join our discord server
Lucifer0612/myssd
Lucifer0612
2024-05-23T03:41:32Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T03:41:32Z
--- license: apache-2.0 ---
Aryan-401/gemma-does-math-2b-it
Aryan-401
2024-05-23T03:41:14Z
145
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "dataset:argilla/distilabel-math-preference-dpo", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T18:02:34Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other datasets: - argilla/distilabel-math-preference-dpo --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
ronenh24/distilbert-base-uncased-finetuned-imdb
ronenh24
2024-05-23T03:35:26Z
107
1
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "dataset:stanfordnlp/imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T19:39:30Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] datasets: - stanfordnlp/imdb --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6819 | 1.0 | 157 | 2.4978 | | 2.5872 | 2.0 | 314 | 2.4488 | | 2.527 | 3.0 | 471 | 2.4823 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
ronenh24/bert-finetuned-squad
ronenh24
2024-05-23T03:34:37Z
124
1
transformers
[ "transformers", "pytorch", "bert", "question-answering", "dataset:rajpurkar/squad", "endpoints_compatible", "region:us" ]
question-answering
2024-05-22T17:31:17Z
--- datasets: - rajpurkar/squad ---
DokHee/JSLLMV7
DokHee
2024-05-23T03:31:34Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T03:19:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hbminsi/segformer-b0-finetuned-segments-sidewalk-2
hbminsi
2024-05-23T03:30:57Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-05-23T03:16:55Z
--- license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-2 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 2.7084 - Mean Iou: 0.0871 - Mean Accuracy: 0.1451 - Overall Accuracy: 0.6167 - Accuracy Unlabeled: nan - Accuracy Flat-road: 0.5180 - Accuracy Flat-sidewalk: 0.9088 - Accuracy Flat-crosswalk: 0.0001 - Accuracy Flat-cyclinglane: 0.0259 - Accuracy Flat-parkingdriveway: 0.0 - Accuracy Flat-railtrack: nan - Accuracy Flat-curb: 0.0012 - Accuracy Human-person: 0.0017 - Accuracy Human-rider: 0.0 - Accuracy Vehicle-car: 0.9553 - Accuracy Vehicle-truck: 0.0 - Accuracy Vehicle-bus: 0.0 - Accuracy Vehicle-tramtrain: nan - Accuracy Vehicle-motorcycle: 0.0 - Accuracy Vehicle-bicycle: 0.0000 - Accuracy Vehicle-caravan: 0.0 - Accuracy Vehicle-cartrailer: 0.0 - Accuracy Construction-building: 0.4663 - Accuracy Construction-door: 0.0 - Accuracy Construction-wall: 0.0670 - Accuracy Construction-fenceguardrail: 0.0 - Accuracy Construction-bridge: 0.0 - Accuracy Construction-tunnel: nan - Accuracy Construction-stairs: 0.0 - Accuracy Object-pole: 0.0064 - Accuracy Object-trafficsign: 0.0 - Accuracy Object-trafficlight: 0.0 - Accuracy Nature-vegetation: 0.9708 - Accuracy Nature-terrain: 0.0024 - Accuracy Sky: 0.5740 - Accuracy Void-ground: 0.0 - Accuracy Void-dynamic: 0.0 - Accuracy Void-static: 0.0 - Accuracy Void-unclear: 0.0 - Iou Unlabeled: nan - Iou Flat-road: 0.3925 - Iou Flat-sidewalk: 0.6649 - Iou Flat-crosswalk: 0.0001 - Iou Flat-cyclinglane: 0.0249 - Iou Flat-parkingdriveway: 0.0 - Iou Flat-railtrack: 0.0 - Iou Flat-curb: 0.0012 - Iou Human-person: 0.0017 - Iou Human-rider: 0.0 - Iou Vehicle-car: 0.2851 - Iou Vehicle-truck: 0.0 - Iou Vehicle-bus: 0.0 - Iou Vehicle-tramtrain: 0.0 - Iou Vehicle-motorcycle: 0.0 - Iou Vehicle-bicycle: 0.0000 - Iou Vehicle-caravan: 0.0 - Iou Vehicle-cartrailer: 0.0 - Iou Construction-building: 0.3825 - Iou Construction-door: 0.0 - Iou Construction-wall: 0.0540 - Iou Construction-fenceguardrail: 0.0 - Iou Construction-bridge: 0.0 - Iou Construction-tunnel: 0.0 - Iou Construction-stairs: 0.0 - Iou Object-pole: 0.0048 - Iou Object-trafficsign: 0.0 - Iou Object-trafficlight: 0.0 - Iou Nature-vegetation: 0.6011 - Iou Nature-terrain: 0.0024 - Iou Sky: 0.5451 - Iou Void-ground: 0.0 - Iou Void-dynamic: 0.0 - Iou Void-static: 0.0 - Iou Void-unclear: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:| | 2.9796 | 0.4 | 20 | 3.2289 | 0.0664 | 0.1236 | 0.5591 | nan | 0.2423 | 0.9160 | 0.0000 | 0.0110 | 0.0000 | nan | 0.0006 | 0.0021 | 0.0 | 0.9292 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.2970 | 0.0 | 0.1131 | 0.0 | 0.0 | nan | 0.0 | 0.0502 | 0.0 | 0.0 | 0.9793 | 0.0015 | 0.2752 | 0.0 | 0.0148 | 0.0000 | 0.0 | 0.0 | 0.2077 | 0.6059 | 0.0000 | 0.0107 | 0.0000 | 0.0 | 0.0006 | 0.0020 | 0.0 | 0.2985 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.2620 | 0.0 | 0.0654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0249 | 0.0 | 0.0 | 0.5724 | 0.0015 | 0.2711 | 0.0 | 0.0027 | 0.0000 | 0.0 | | 2.6315 | 0.8 | 40 | 2.7084 | 0.0871 | 0.1451 | 0.6167 | nan | 0.5180 | 0.9088 | 0.0001 | 0.0259 | 0.0 | nan | 0.0012 | 0.0017 | 0.0 | 0.9553 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.4663 | 0.0 | 0.0670 | 0.0 | 0.0 | nan | 0.0 | 0.0064 | 0.0 | 0.0 | 0.9708 | 0.0024 | 0.5740 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.3925 | 0.6649 | 0.0001 | 0.0249 | 0.0 | 0.0 | 0.0012 | 0.0017 | 0.0 | 0.2851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.3825 | 0.0 | 0.0540 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0048 | 0.0 | 0.0 | 0.6011 | 0.0024 | 0.5451 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.41.1 - Pytorch 1.13.1 - Datasets 2.19.1 - Tokenizers 0.19.1
houbw/llama3_ruozhiba_ori_8_up_5
houbw
2024-05-23T03:27:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct", "base_model:finetune:unsloth/llama-3-8b-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T03:26:51Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct --- # Uploaded model - **Developed by:** houbw - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RomBor/poca-SoccerTwos
RomBor
2024-05-23T03:18:16Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-05-23T03:18:10Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog ๐Ÿถ to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: RomBor/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play ๐Ÿ‘€
Pragash-Mohanarajah/roberta-base-finetuned-bible
Pragash-Mohanarajah
2024-05-23T03:16:48Z
138
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-22T22:49:04Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-bible results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-bible This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0770 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4542 | 1.0 | 2460 | 1.1916 | | 1.3233 | 2.0 | 4920 | 1.1017 | | 1.2945 | 3.0 | 7380 | 1.0770 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
benchang1110/Taiwan-tinyllama-v1.0-chat
benchang1110
2024-05-23T03:14:14Z
152
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "dataset:benchang1110/ChatTaiwan", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T11:49:42Z
--- language: - zh license: apache-2.0 datasets: - benchang1110/ChatTaiwan pipeline_tag: text-generation widget: - example_title: ็ฏ„ไพ‹ไธ€ messages: - role: user content: ไฝ ๅฅฝ --- ## Model Card for Model ID This model is the instruction finetuning version of [benchang1110/Taiwan-tinyllama-v1.0-base](https://huggingface.co/benchang1110/Taiwan-tinyllama-v1.0-base). ## Usage ```python import torch, transformers def generate_response(): model = transformers.AutoModelForCausalLM.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat", torch_dtype=torch.bfloat16, device_map=device,attn_implementation="flash_attention_2") tokenizer = transformers.AutoTokenizer.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat") streamer = transformers.TextStreamer(tokenizer,skip_prompt=True) while(1): prompt = input('USER:') if prompt == "exit": break print("Assistant: ") message = [ {'content': prompt, 'role': 'user'}, ] untokenized_chat = tokenizer.apply_chat_template(message,tokenize=False,add_generation_prompt=False) inputs = tokenizer.encode_plus(untokenized_chat, add_special_tokens=True, return_tensors="pt",return_attention_mask=True).to(device) outputs = model.generate(inputs["input_ids"],attention_mask=inputs['attention_mask'],streamer=streamer,use_cache=True,max_new_tokens=512,do_sample=True,temperature=0.1,repetition_penalty=1.2) if __name__ == '__main__': device = 'cuda' if torch.cuda.is_available() else 'cpu' generate_response() ```
khaingsmon/xho-1
khaingsmon
2024-05-23T03:11:41Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:ml-superb-subset", "base_model:Akashpb13/Swahili_xlsr", "base_model:finetune:Akashpb13/Swahili_xlsr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T01:24:45Z
--- license: apache-2.0 base_model: Akashpb13/Swahili_xlsr tags: - generated_from_trainer datasets: - ml-superb-subset model-index: - name: xho-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xho-1 This model is a fine-tuned version of [Akashpb13/Swahili_xlsr](https://huggingface.co/Akashpb13/Swahili_xlsr) on the ml-superb-subset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Andyhahaho/distilbert-base-uncased-finetuned-adl_hw1
Andyhahaho
2024-05-23T03:09:37Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T12:10:50Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-adl_hw1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-adl_hw1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2325 - Accuracy: 0.0003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.2678 | 1.0 | 938 | 1.5396 | 0.0 | | 0.9511 | 2.0 | 1876 | 0.3407 | 0.0 | | 0.16 | 3.0 | 2814 | 0.2027 | 0.0 | | 0.0492 | 4.0 | 3752 | 0.1910 | 0.0 | | 0.0227 | 5.0 | 4690 | 0.1803 | 0.0 | | 0.0142 | 6.0 | 5628 | 0.2025 | 0.0 | | 0.014 | 7.0 | 6566 | 0.2010 | 0.0 | | 0.0064 | 8.0 | 7504 | 0.2267 | 0.0 | | 0.0076 | 9.0 | 8442 | 0.2312 | 0.0 | | 0.0065 | 10.0 | 9380 | 0.2257 | 0.0 | | 0.0051 | 11.0 | 10318 | 0.2285 | 0.0 | | 0.003 | 12.0 | 11256 | 0.2325 | 0.0003 | | 0.0031 | 13.0 | 12194 | 0.2582 | 0.0 | | 0.0009 | 14.0 | 13132 | 0.2445 | 0.0 | | 0.0012 | 15.0 | 14070 | 0.2511 | 0.0 | | 0.0006 | 16.0 | 15008 | 0.2568 | 0.0 | | 0.0002 | 17.0 | 15946 | 0.2586 | 0.0 | | 0.0002 | 18.0 | 16884 | 0.2620 | 0.0 | | 0.0001 | 19.0 | 17822 | 0.2606 | 0.0 | | 0.0001 | 20.0 | 18760 | 0.2631 | 0.0 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Pm06/sd-v1-5-finetuned
Pm06
2024-05-23T03:07:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T01:34:37Z
--- license: apache-2.0 ---
jan-hq/Meta-Llama-3-8B-Instruct-resized
jan-hq
2024-05-23T02:59:27Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:01:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MILVLG/Imp-v1.5-3B-196
MILVLG
2024-05-23T02:57:02Z
116
0
transformers
[ "transformers", "safetensors", "imp", "text-generation", "custom_code", "dataset:liuhaotian/LLaVA-Pretrain", "dataset:liuhaotian/LLaVA-Instruct-150K", "arxiv:2405.12107", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T05:58:47Z
--- license: apache-2.0 pipeline_tag: text-generation datasets: - liuhaotian/LLaVA-Pretrain - liuhaotian/LLaVA-Instruct-150K --- # ๐Ÿ˜ˆ Imp ## Introduction Based on [`Imp-v1.5-3B-phi2`](https://huggingface.co/MILVLG/Imp-v1.5-3B-Phi2), we reduce the resolution of the input image from 384 to 196, and retrain the model using the same settings to obtain `Imp-v1.5-3B-196` ## License This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details. ## Citation If you use our model or refer our work in your studies, please cite: ```bibtex @article{imp2024, title={Imp: Highly Capable Large Multimodal Models for Mobile Devices}, author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun}, journal={arXiv preprint arXiv:2405.12107}, year={2024} } ```
thesven/Hermes-2-Theta-Llama-3-8B-GPTQ
thesven
2024-05-23T02:52:23Z
80
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-05-23T02:36:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
h104/dippy_1049
h104
2024-05-23T02:50:44Z
76
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:49:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
h104/dippy_1016
h104
2024-05-23T02:43:46Z
75
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:16:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rabbitcat/DataSmith-6b
rabbitcat
2024-05-23T02:39:39Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T09:44:09Z
--- license: apache-2.0 --- # DataSmith ## Introduction DataSmith is a large model designed to generate JSON-format data from textual content. The DataSmith-6B version, equipped with 6 billion parameters, is fine-tuned using a comprehensive selection of data sources, including news, encyclopedias, legal documents, medical records, advertising, academic papers, books, novels, and various public announcements. This model serves as the foundation for a series of task-specific adaptations. ## Models Available - DataSmith-6B - Hugging Face Model Hub: [DataSmith-6B](https://huggingface.co/rabbitcat/DataSmith-6b) - github๏ผš[DataSmith](https://github.com/element-factory/DataSmith) ## Usage You can use the model directly or load it with device and dtype settings. The following is an example of generating questions and answers based on text content. You also can use `quick_start_demo.py` to generate question and answer pairs based on text content. ```python # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rabbitcat/DataSmith-6b") # Load model with device and dtype settings model = AutoModelForCausalLM.from_pretrained( "rabbitcat/DataSmith-6b", device_map="auto", torch_dtype='auto' ).eval() # Generate prompt and text prompt = "่ฏปๅ–ไปฅไธ‹ๆ–‡ๆœฌๆๆ–™๏ผŒๅนถๆ นๆฎๆๆ–™็”Ÿๆˆ้—ฎ้ข˜๏ผŒไปฅๅŠ้—ฎ้ข˜็š„็ญ”ๆกˆใ€‚้—ฎ้ข˜ไธๅบ”่ฏฅๆ˜ฏๅผ€ๆ”พๅผ็š„๏ผŒๅบ”่ฏฅ่ƒฝๅคŸ้€š่ฟ‡ๆๆ–™ๅ›ž็ญ”ใ€‚้—ฎ้ข˜็š„็ญ”ๆกˆๅบ”่ฏฅๅœจๆๆ–™ไธญ่กจ่ฟฐๆˆ–ๆš—็คบใ€‚้—ฎ้ข˜ๅบ”่ฏฅไธŽๆๆ–™็›ธๅ…ณ๏ผŒไธๅบ”่ฏฅๅคชๅ…ทไฝ“ๆˆ–ๅคชๆ™ฎ้ใ€‚่พ“ๅ‡บๅบ”ไธบjsonๆ ผๅผใ€‚\n" text = "ๆ–‡ๆœฌๆๆ–™๏ผš\nๅŒ—ไบฌๅธ‚ๅœฐๅค„ไธญๅ›ฝๅŒ—้ƒจใ€ๅŽๅŒ—ๅนณๅŽŸๅŒ—้ƒจ๏ผŒไธœไธŽๅคฉๆดฅๅธ‚ๆฏ—่ฟž๏ผŒๅ…ถไฝ™ๅ‡ไธŽๆฒณๅŒ—็œ็›ธ้‚ป๏ผŒไธญๅฟƒไฝไบŽไธœ็ป116ยฐ20โ€ฒใ€ๅŒ—็บฌ39ยฐ56โ€ฒ๏ผŒๅŒ—ไบฌๅธ‚ๅœฐๅŠฟ่ฅฟๅŒ—้ซ˜ใ€ไธœๅ—ไฝŽใ€‚่ฅฟ้ƒจใ€ๅŒ—้ƒจๅ’ŒไธœๅŒ—้ƒจไธ‰้ข็Žฏๅฑฑ๏ผŒไธœๅ—้ƒจๆ˜ฏไธ€็‰‡็ผ“็ผ“ๅ‘ๆธคๆตทๅ€พๆ–œ็š„ๅนณๅŽŸใ€‚ๅขƒๅ†…ๆต็ป็š„ไธป่ฆๆฒณๆตๆœ‰๏ผšๆฐธๅฎšๆฒณใ€ๆฝฎ็™ฝๆฒณใ€ๅŒ—่ฟๆฒณใ€ๆ‹’้ฉฌๆฒณ็ญ‰๏ผŒๅŒ—ไบฌๅธ‚็š„ๆฐ”ๅ€™ไธบๆš–ๆธฉๅธฆๅŠๆนฟๆถฆๅŠๅนฒๆ—ฑๅญฃ้ฃŽๆฐ”ๅ€™๏ผŒๅคๅญฃ้ซ˜ๆธฉๅคš้›จ๏ผŒๅ†ฌๅญฃๅฏ’ๅ†ทๅนฒ็‡ฅ๏ผŒๆ˜ฅใ€็ง‹็Ÿญไฟƒใ€‚" # Generate messages for the model messages = [ { "role": "user", "content": prompt + text } ] # Tokenize and generate response input_ids = tokenizer.apply_chat_template( conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt' ) output_ids = model.generate( input_ids.to('cuda'), max_new_tokens=512, ) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) print(response) # Model response: # [ # {"question":"ๅŒ—ไบฌๅธ‚ไฝไบŽไธญๅ›ฝ็š„ๅ“ชไธชๆ–นไฝ๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅœฐๅค„ไธญๅ›ฝๅŒ—้ƒจใ€ๅŽๅŒ—ๅนณๅŽŸๅŒ—้ƒจใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ไธœไธดๅ“ชไธชๅŸŽๅธ‚๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ไธœไธŽๅคฉๆดฅๅธ‚ๆฏ—่ฟžใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅ‘จ่พนไธŽๅ“ชไธช็œไปฝ็›ธ้‚ป๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅ…ถไฝ™ๅ‡ไธŽๆฒณๅŒ—็œ็›ธ้‚ปใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚็š„ไธญๅฟƒไฝ็ฝฎๆ˜ฏๆ€Žๆ ท็š„๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ไธญๅฟƒไฝไบŽไธœ็ป116ยฐ20โ€ฒใ€ๅŒ—็บฌ39ยฐ56โ€ฒใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅœฐๅŠฟ็š„ๆ€ปไฝ“็‰นๅพๆ˜ฏไป€ไนˆ๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅœฐๅŠฟ่ฅฟๅŒ—้ซ˜ใ€ไธœๅ—ไฝŽใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚่ฅฟ้ƒจใ€ๅŒ—้ƒจๅ’ŒไธœๅŒ—้ƒจไธ‰้ข่ขซไป€ไนˆ็Žฏ็ป•๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚่ฅฟ้ƒจใ€ๅŒ—้ƒจๅ’ŒไธœๅŒ—้ƒจไธ‰้ข็Žฏๅฑฑใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ไธœๅ—้ƒจๆ˜ฏไป€ไนˆๅœฐๅฝข๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ไธœๅ—้ƒจๆ˜ฏไธ€็‰‡็ผ“็ผ“ๅ‘ๆธคๆตทๅ€พๆ–œ็š„ๅนณๅŽŸใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅขƒๅ†…ไธป่ฆๆต็ปๅ“ชไบ›ๆฒณๆต๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅขƒๅ†…ๆต็ป็š„ไธป่ฆๆฒณๆตๆœ‰ๆฐธๅฎšๆฒณใ€ๆฝฎ็™ฝๆฒณใ€ๅŒ—่ฟๆฒณใ€ๆ‹’้ฉฌๆฒณ็ญ‰ใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚็š„ๆฐ”ๅ€™็ฑปๅž‹ๆ˜ฏไป€ไนˆ๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚็š„ๆฐ”ๅ€™ไธบๆš–ๆธฉๅธฆๅŠๆนฟๆถฆๅŠๅนฒๆ—ฑๅญฃ้ฃŽๆฐ”ๅ€™ใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅ“ชไธชๅญฃ่Š‚็š„ๆฐ”ๆธฉๆœ€้ซ˜๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅคๅญฃ็š„ๆฐ”ๆธฉๆœ€้ซ˜ใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅ“ชไธชๅญฃ่Š‚้™้›จ้‡ๆœ€ๅคš๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅคๅญฃ้™้›จ้‡ๆœ€ๅคšใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅ“ชไธชๅญฃ่Š‚ๆฐ”ๅ€™ๆœ€ๅฏ’ๅ†ท๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๅ†ฌๅญฃๆฐ”ๅ€™ๆœ€ๅฏ’ๅ†ทใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅ“ชไธชๅญฃ่Š‚็ง‹้ซ˜ๆฐ”็ˆฝ๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚็ง‹ๅญฃๆฐ”ๅ€™็ง‹้ซ˜ๆฐ”็ˆฝใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๅ“ชไธชๅญฃ่Š‚ๆ˜ฅๆš–่Šฑๅผ€๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๆ˜ฅๅญฃๆฐ”ๅ€™ๆ˜ฅๆš–่Šฑๅผ€ใ€‚"}, # {"question":"ๅŒ—ไบฌๅธ‚ๆ˜ฅๅญฃๅ’Œ็ง‹ๅญฃๅˆ†ๅˆซๆŒ็ปญๅคš้•ฟๆ—ถ้—ด๏ผŸ", "answer":"ๅŒ—ไบฌๅธ‚ๆ˜ฅใ€็ง‹็Ÿญไฟƒใ€‚"} # ] ``` ## Datasets We use gpt-4 to generate training corpus by constructing prompt. If you need it, please contact us by email. ## Contributing Our team has two contributors, and we are looking for more contributors to join us. You can contribute in several ways: 1. Open an issue 2. Contact us by email ## Contact Us - Email: - [email protected] - [email protected]
nannnzk/llama-loki-002
nannnzk
2024-05-23T02:32:59Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:26:31Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EthanRhys/Red-WarioWare
EthanRhys
2024-05-23T02:29:35Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2024-05-23T02:28:46Z
--- license: openrail++ ---
MILVLG/Imp-v1.5-3B-196-q4f16_1-MLC
MILVLG
2024-05-23T02:23:50Z
0
0
null
[ "text-generation", "dataset:liuhaotian/LLaVA-Pretrain", "dataset:liuhaotian/LLaVA-Instruct-150K", "arxiv:2405.12107", "license:apache-2.0", "region:us" ]
text-generation
2024-05-20T04:20:38Z
--- license: apache-2.0 pipeline_tag: text-generation datasets: - liuhaotian/LLaVA-Pretrain - liuhaotian/LLaVA-Instruct-150K --- # ๐Ÿ˜ˆ Imp ## Introduction To fit the [MLC](https://github.com/mlc-ai/mlc-llm) framework for mobile devices, we further perform the 4-bit quantization to [`Imp-v1.5-3B-196`](https://huggingface.co/MILVLG/Imp-v1.5-3B-196-q4f16_1-MLC) to obtain `Imp-v1.5-3B-196-q4f16_1-MLC`. To use this model on moblie devices, please refer to the [mlc-imp](https://github.com/MILVLG/mlc-imp) project. ## License This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details. ## Citation If you use our model or refer our work in your studies, please cite: ```bibtex @article{imp2024, title={Imp: Highly Capable Large Multimodal Models for Mobile Devices}, author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun}, journal={arXiv preprint arXiv:2405.12107}, year={2024} } ```
sanxialiuzhan/llama3-lora-openIE
sanxialiuzhan
2024-05-23T02:23:13Z
1
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-05-22T15:18:20Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the duie dataset. It achieves the following results on the evaluation set: - Loss: 0.0501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 96 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.1 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.074 | 0.16 | 500 | 0.0621 | | 0.0625 | 0.31 | 1000 | 0.0562 | | 0.0581 | 0.47 | 1500 | 0.0543 | | 0.0626 | 0.62 | 2000 | 0.0530 | | 0.0597 | 0.78 | 2500 | 0.0524 | | 0.0619 | 0.93 | 3000 | 0.0500 | | 0.0445 | 1.09 | 3500 | 0.0499 | | 0.0501 | 1.25 | 4000 | 0.0492 | | 0.0487 | 1.4 | 4500 | 0.0490 | | 0.0501 | 1.56 | 5000 | 0.0485 | | 0.0516 | 1.71 | 5500 | 0.0472 | | 0.0458 | 1.87 | 6000 | 0.0468 | | 0.0381 | 2.03 | 6500 | 0.0482 | | 0.037 | 2.18 | 7000 | 0.0506 | | 0.0387 | 2.34 | 7500 | 0.0501 | | 0.0363 | 2.49 | 8000 | 0.0498 | | 0.0321 | 2.65 | 8500 | 0.0500 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
xkiwilabs/lora_opComms_LLama3_v4
xkiwilabs
2024-05-23T02:23:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T02:22:46Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** xkiwilabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
houbw/llama3_ruozhiba_ori_8_up_4
houbw
2024-05-23T02:17:37Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct", "base_model:finetune:unsloth/llama-3-8b-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T02:17:08Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct --- # Uploaded model - **Developed by:** houbw - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
saad17g/finetuned_T5_amzn_v3
saad17g
2024-05-23T02:17:18Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-22T18:20:56Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: google-t5/t5-small model-index: - name: finetuned_T5_amzn_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_T5_amzn_v3 This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
acsankar/InsuranceGPT-16bit
acsankar
2024-05-23T02:13:03Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:03:42Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yimingzhang/zephyr-7b-gemma-sft
yimingzhang
2024-05-23T02:11:30Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:yimingzhang/backtrack-0522", "base_model:google/gemma-7b", "base_model:finetune:google/gemma-7b", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:02:41Z
--- license: gemma base_model: google/gemma-7b tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - yimingzhang/backtrack-0522 model-index: - name: zephyr-7b-gemma-sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/wandbruh/huggingface/runs/mmxq7ysi) # zephyr-7b-gemma-sft This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the yimingzhang/backtrack-0522 dataset. It achieves the following results on the evaluation set: - Loss: 24.1747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 63.281 | 0.8571 | 3 | 31.5271 | | 53.5176 | 2.0 | 7 | 24.9493 | | 53.5176 | 2.5714 | 9 | 24.1747 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
GenTrendGPT/ModelType-IV2
GenTrendGPT
2024-05-23T02:08:08Z
141
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:07:06Z
--- base_model: - TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: transformers tags: - mergekit - merge --- # ModelType-IV2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: passthrough slices: - sources: - layer_range: [0, 20] model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 - sources: - layer_range: [0, 20] model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 ```
ahmedsqrd/olmo-7b-hf-4b
ahmedsqrd
2024-05-23T01:58:47Z
3
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:47:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
apple9855/code-search-net-tokenizer
apple9855
2024-05-23T01:51:14Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T01:51:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaodrew/phi-3-mini-4k-instruct-midjourney-autoprompter
gaodrew
2024-05-23T01:50:01Z
0
1
null
[ "safetensors", "unsloth", "opensource", "phi", "license:mit", "region:us" ]
null
2024-05-19T06:12:49Z
--- license: mit tags: - unsloth - opensource - phi --- <img src="https://cloud-3i4ld6u5y-hack-club-bot.vercel.app/0home.png" alt="Akash Network logo" width="200"/> Thank you to the [Akash Network](https://akash.network/) for sponsoring this project and providing A100s/H100s for compute! <a target="_blank" href="https://colab.research.google.com/github/andrewgcodes/autoprompter/blob/main/run_autoprompter.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # Overview Writing good AI art prompts for Midjourney, Stable Diffusion, etcetera takes time and practice. I fine-tuned a small language model (Phi-3) to help you improve your prompts. Fine-tuned version of unquantized [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) using Unsloth on ~100,000 high quality Midjourney AI art prompts This Hugging Face repo contains adapter weights only (you need to load on top of the base Phi-3 weights) <img src="https://cloud-5rbe0uczw-hack-club-bot.vercel.app/0screenshot_2024-05-20_at_3.30.18___pm.png" alt="prompt format" width="400"/> # Inference Recommended inference settings: repetition_penalty = 1.2, temperature = 0.5-1.0 Adjust if the model starts repeating itself. To run (Colab T4 GPU works): <a target="_blank" href="https://colab.research.google.com/github/andrewgcodes/autoprompter/blob/main/run_autoprompter.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # Fine-tuning Details I used this [reference code](https://medium.com/@mauryaanoop3/fine-tuning-phi-3-with-unsloth-for-superior-performance-on-custom-data-2c14b3c1e90b) from Anoop Maurya. After experimenting with various settings and parameters, this is what I settled on: - Max seq length: 128 (few prompts are longer than 128 tokens, at this point you probably get diminishing returns on image quality) - fine-tuned on unquantized base weights - LoRA: R=32, Alpha=32 - Batch size: 32 - Epochs: 1 - Gradient accumulation steps: 4 - Warmup steps: 100 - Learning rate: 1e4 - Optimizer: adamw_8bit I used an H100 GPU from the Akash Network. # Dataset Please see [gaodrew/midjourney-prompts-highquality](https://huggingface.co/datasets/gaodrew/midjourney-prompts-highquality) # Limitations The model is prone to listing out adjectives. For example: "kitten, cute kitten with a big smile on its face and fluffy fur. The background is filled with colorful flowers in various shades of pink, purple, blue, yellow, orange, green, red, white, black, brown, gray, gold, silver, bronze, copper, brass, steel, aluminum, titanium, platinum, diamond"
GenTrendGPT/ModelType-IV
GenTrendGPT
2024-05-23T01:46:10Z
141
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:45:17Z
--- base_model: - TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: transformers tags: - mergekit - merge --- # ModelType-IV This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: passthrough slices: - sources: - layer_range: [0, 20] model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 - sources: - layer_range: [0, 20] model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 ```
firqaaa/indo-gemma-2b-alpaca
firqaaa
2024-05-23T01:40:13Z
121
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-03T14:02:40Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-2b-bnb-4bit pipeline_tag: text-generation --- ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### How to use ```python # Prompt alpaca_prompt = """Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang melengkapi instruksi dengan tepat. ### Instruksi: {} ### Masukan: {} ### Tanggapan: {}""" max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally! dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False. if True: from unsloth import FastLanguageModel model, tokenizer = FastLanguageModel.from_pretrained( model_name = "indo-gemma-2b-alpaca", max_seq_length = max_seq_length, dtype = dtype, load_in_4bit = load_in_4bit ) FastLanguageModel.for_inference(model) # Enable native 2x faster inference inputs = tokenizer( [ alpaca_prompt.format( "Sebutkan langkah-langkah membuat nasi goreng!", "", # input "", # output - leave this blank for generation! ) ], return_tensors = "pt" ).to("cuda") from transformers import TextStreamer text_streamer = TextStreamer(tokenizer) _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 256) ``` ### Uploaded model - **Developed by:** firqaaa - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
InnerI/synCAI-144k-gpt2.5
InnerI
2024-05-23T01:39:20Z
10
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-29T02:11:48Z
--- license: mit --- # โš  CONSTRUCTION GOING # synCAI-144k-gpt-2.5 ![synCAI-144k-gpt-2.5](https://i.arxius.io/b39a96c0.webp) ## Overview synCAI-144k-gpt-2.5 is a large language model designed to advance AI and consciousness studies. This model is fine-tuned on the InnerI/synCAI_144kda dataset, which contains 144,000 synthetic data points focused on consciousness-related topics. ## Training Dataset The dataset used for fine-tuning is InnerI/synCAI_144kda, available at [InnerI/synCAI_144kda](https://huggingface.co/datasets/InnerI/synCAI_144kda). It includes: - **10,000 Unique Rows**: Diverse questions and responses designed to advance AI and consciousness studies. - **144,000 Synthetic Rows**: Additional data from Mostly AI, providing a comprehensive training dataset. - also at Mostly AI - https://app.mostly.ai/d/synthetic-datasets/992ddc63-7059-4bb8-8dd8-a7eb2dc7a579 ## Intended Use synCAI-144k-gpt-2.5 is intended for AI applications in consciousness studies and large-scale AI tasks. Potential use cases include: - Generating responses to questions about consciousness, covering philosophical, neuroscientific, and quantum topics. - Assisting in AI-based consciousness research and analysis. - Supporting AI training and development with a focus on consciousness-related tasks. ## Model Capabilities synCAI-144k-gpt-2.5 can: - Generate responses to questions about consciousness, drawing from a diverse dataset. - Assist in training AI models for consciousness studies and related applications. - Support AI-based analysis and research in fields focusing on consciousness. ## Licensing and Usage Ensure compliance with any licensing agreements or usage restrictions when using this model. It is intended for academic and research purposes. If you use or share the model, provide appropriate attribution. ### Contributing Contributions to the model are welcome. If you have suggestions for improvements or additional use cases, consider submitting them for review and inclusion. ## Contact Information For further information about the model or additional questions, please contact [@innerinetco](https://x.com/innerinetco)
datek/gemma-2b-flock-1716428178
datek
2024-05-23T01:38:52Z
142
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:36:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
euunn/0523_llama2
euunn
2024-05-23T01:33:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T01:32:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dana2002/latest-finetuned
dana2002
2024-05-23T01:31:07Z
123
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-22T14:18:16Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: latest-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # latest-finetuned This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0520 - Wer: 10.0274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0309 | 2.03 | 1000 | 0.0481 | 12.1580 | | 0.0054 | 4.07 | 2000 | 0.0485 | 10.2808 | | 0.0013 | 6.1 | 3000 | 0.0520 | 10.0274 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
hgnoi/cyO1jWNJQGYnw9v6
hgnoi
2024-05-23T01:28:55Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:27:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/DDZwEl9tFFDkz6Vh
hgnoi
2024-05-23T01:28:37Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:27:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FernandoD95/poca-SoccerTwos
FernandoD95
2024-05-23T01:21:54Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "ML-Agents", "region:us" ]
reinforcement-learning
2024-05-23T01:00:48Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos - ML-Agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog ๐Ÿถ to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: FernandoD95/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play ๐Ÿ‘€
T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0
T3Q-LLM
2024-05-23T01:17:24Z
2,249
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:maywell/ko_Ultrafeedback_binarized", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-17T12:31:29Z
--- library_name: transformers license: apache-2.0 pipeline_tag: text-generation datasets: - maywell/ko_Ultrafeedback_binarized base model: - yanolja/EEVE-Korean-Instruct-10.8B-v1.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png) # T3Q-LLM-sft1.0-dpo1.0 ## This model is a version of T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0 that has been fine-tuned with DPO. ## Model Developers Chihoon Lee(chihoonlee10), T3Q ## Prompt Template ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: {prompt} Assistant: ``` ## How to Use it ```python from transformers import AutoTokenizer from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0") tokenizer = AutoTokenizer.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0") prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n" text = 'ํ•œ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”? ์•„๋ž˜ ์„ ํƒ์ง€ ์ค‘ ๊ณจ๋ผ์ฃผ์„ธ์š”.\n\n(A) ๊ฒฝ์„ฑ\n(B) ๋ถ€์‚ฐ\n(C) ํ‰์–‘\n(D) ์„œ์šธ\n(E) ์ „์ฃผ' model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt') outputs = model.generate(**model_inputs, max_new_tokens=256) output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] print(output_text) ``` ### Example Output ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: ํ•œ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”? ์•„๋ž˜ ์„ ํƒ์ง€ ์ค‘ ๊ณจ๋ผ์ฃผ์„ธ์š”. (A) ๊ฒฝ์„ฑ (B) ๋ถ€์‚ฐ (C) ํ‰์–‘ (D) ์„œ์šธ (E) ์ „์ฃผ Assistant: (D) ์„œ์šธ์ด ํ•œ๊ตญ์˜ ์ˆ˜๋„์ž…๋‹ˆ๋‹ค. ์„œ์šธ์€ ๋‚˜๋ผ์˜ ๋ถ๋™๋ถ€์— ์œ„์น˜ํ•ด ์žˆ์œผ๋ฉฐ, ์ •์น˜, ๊ฒฝ์ œ, ๋ฌธํ™”์˜ ์ค‘์‹ฌ์ง€์ž…๋‹ˆ๋‹ค. ์•ฝ 1,000๋งŒ ๋ช…์ด ๋„˜๋Š” ์ธ๊ตฌ๋ฅผ ๊ฐ€์ง„ ์„ธ๊ณ„์—์„œ ๊ฐ€์žฅ ํฐ ๋„์‹œ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์„œ์šธ์€ ๋†’์€ ๋นŒ๋”ฉ, ํ˜„๋Œ€์ ์ธ ์ธํ”„๋ผ, ํ™œ๊ธฐ ๋ฌธํ™” ์žฅ๋ฉด์œผ๋กœ ์œ ๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋งŽ์€ ์—ญ์‚ฌ์  ๋ช…์†Œ์™€ ๋ฐ•๋ฌผ๊ด€์ด ์žˆ์–ด ๋ฐฉ๋ฌธ๊ฐ๋“ค์—๊ฒŒ ํ’๋ถ€ํ•œ ๋ฌธํ™” ์ฒดํ—˜์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ``` | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.9387|ยฑ |0.0064| | | |macro_f1|0.9387|ยฑ |0.0064| |kobest_copa | 0|acc |0.7590|ยฑ |0.0135| | | |macro_f1|0.7585|ยฑ |0.0135| |kobest_hellaswag| 0|acc |0.5080|ยฑ |0.0224| | | |acc_norm|0.5580|ยฑ |0.0222| | | |macro_f1|0.5049|ยฑ |0.0224| |kobest_sentineg | 0|acc |0.8489|ยฑ |0.0180| | | |macro_f1|0.8483|ยฑ |0.0180| hf-causal-experimental (pretrained=nlpai-lab/KULLM3,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8 | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.8896|ยฑ |0.0084| | | |macro_f1|0.8888|ยฑ |0.0084| |kobest_copa | 0|acc |0.6930|ยฑ |0.0146| | | |macro_f1|0.6925|ยฑ |0.0147| |kobest_hellaswag| 0|acc |0.4640|ยฑ |0.0223| | | |acc_norm|0.5240|ยฑ |0.0224| | | |macro_f1|0.4612|ยฑ |0.0223| |kobest_sentineg | 0|acc |0.6297|ยฑ |0.0243| | | |macro_f1|0.6255|ยฑ |0.0244|
MVRL/scalemae-vitlarge-800
MVRL
2024-05-23T01:15:56Z
0
0
null
[ "pytorch", "arxiv:2212.14532", "license:apache-2.0", "region:us" ]
null
2024-05-23T01:05:23Z
--- license: apache-2.0 --- Model: ScaleMAE (https://arxiv.org/abs/2212.14532) Variant: scalemae-vitlarge-800 Example Usage: ```python from huggingface_hub import hf_hub_download import torch hf_hub_download("MVRL/scalemae-vitlarge-800", "model.py", local_dir=".") from model import ScaleMAE_baseline model = ScaleMAE_baseline.from_pretrained("MVRL/scalemae-vitlarge-800") print(model(torch.randn(1, 3, 224, 224),patch_size=16).shape) ```
EleutherAI/Mistral-7B-v0.1-hemisphere-random-standardized-random-names
EleutherAI
2024-05-23T01:15:27Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:28:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
houbw/llama3_ruozhiba_ori_8_up_3
houbw
2024-05-23T01:15:16Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct", "base_model:finetune:unsloth/llama-3-8b-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T01:14:46Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct --- # Uploaded model - **Developed by:** houbw - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
leafspark/Mistral-7B-v0.3
leafspark
2024-05-23T01:02:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T00:42:24Z
--- license: apache-2.0 --- # Model Card for Mistral-7B-v0.3 The Mistral-7B-v0.3 Large Language Model (LLM) is a base model, updated from [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/edit/main/README.md). Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/edit/main/README.md) - Extended vocabulary to 32768 - Supports v3 Tokenizer - Supports function calling ## Installation It is recommended to use `leafspark/Mistral-7B-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling. ``` pip install mistral_inference ``` ## Download ```py from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="leafspark/Mistral-7B-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path) ``` ## Limitations The base model does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lรฉlio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothรฉe Lacroix, Thรฉophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
weifar/llama3-8b-500-v2
weifar
2024-05-23T00:58:42Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T00:56:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
julep-ai/dolphin-2.9.1-llama-3-70b-awq
julep-ai
2024-05-23T00:57:41Z
81
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-05-22T22:19:31Z
--- library_name: transformers tags: [] --- # Model Card for dolphin-2.9.1-llama-3-70b-<u>awq</u> AWQ Quantized version of [cognitivecomputations/dolphin-2.9.1-llama-3-70b](/cognitivecomputations/dolphin-2.9.1-llama-3-70b). For use with vllm and other inference engines.
ssec-uw/OLMo-7B-Instruct-GGUF
ssec-uw
2024-05-23T00:48:02Z
128
6
null
[ "gguf", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-07T17:55:34Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation --- # OLMo 7B-Instruct-GGUF > For more details on OLMO-7B-Instruct, refer to [Allen AI's OLMo-7B-Instruct model card](https://huggingface.co/allenai/OLMo-7B-Instruct). OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. The Instruct version is trained on the [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned). OLMo 7B Instruct is trained for better question answering. They show the performance gain that OLMo base models can achieve with existing fine-tuning techniques. This version of the model is derived from [ssec-uw/OLMo-7B-Instruct-hf](https://huggingface.co/ssec-uw/OLMo-7B-Instruct-hf) as [GGUF format](https://huggingface.co/docs/hub/en/gguf), a binary format that is optimized for quick loading and saving of models, making it highly efficient for inference purposes. In addition to the model being in GGUF format, the model has been [quantized](https://huggingface.co/docs/optimum/en/concept_guides/quantization), to reduce the computational and memory costs of running inference. *We are currently working on adding all of the [Quantization Types](https://huggingface.co/docs/hub/en/gguf#quantization-types)*. These files are designed for use with [GGML](https://ggml.ai/) and executors based on GGML such as [llama.cpp](https://github.com/ggerganov/llama.cpp). ## Get Started To get started using one of the GGUF file, you can simply use [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python binding for `llama.cpp`. 1. Install `llama-cpp-python` of at least `v0.2.70` with pip. The following command will install a pre-built wheel with basic CPU support. For other installation methods, see [llama-cpp-python installation docs](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation). ```bash pip install llama-cpp-python>=0.2.70 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu ``` 3. Download one of the GGUF file. In this example, we will download the [OLMo-7B-Instruct-Q4_K_M.gguf](https://huggingface.co/ssec-uw/OLMo-7B-Instruct-GGUF/resolve/main/OLMo-7B-Instruct-Q4_K_M.gguf?download=true), when the link is clicked. 4. Open up a python interpreter and run the following commands. For example, we can ask it: `What is a solar system?` *You will need to modify the `model_path` argument to where the GGUF model has been saved in your system* ```python from llama_cpp import Llama llm = Llama( model_path="path/to/OLMo-7B-Instruct-Q4_K_M.gguf" ) result_dict = llm.create_chat_completion( messages = [ { "role": "user", "content": "What is a solar system?" } ] ) print(result_dict['choices'][0]['message']['content']) ``` 5. That's it, you should see the result fairly quickly! Have fun! ๐Ÿค– ## Contact For errors in this model card, contact Don or Anant, {landungs, anmittal} at uw dot edu. ## Acknowledgement We would like to thank the hardworking folks at [Allen AI](https://huggingface.co/allenai) for providing the original model. Additionally, the work to convert and quantize the model was done by the [University of Washington Scientific Software Engineering Center (SSEC)](https://escience.washington.edu/software-engineering/ssec/), as part of the [Schmidt Sciences Virtual Institute for Scientific Software (VISS)](https://www.schmidtsciences.org/viss/).
fearlessdots/WizardLM-2-7B-abliterated
fearlessdots
2024-05-23T00:46:45Z
281
13
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:08:05Z
--- license: apache-2.0 --- # WizardLM-2-7B-abliterated This is the **WizardLM-2-7B** model with orthogonalized bfloat16 safetensor weights, based on the implementation by `@failspy`. For more info: - Original paper preview presenting the methodology: <https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction> - Jupyter notebook containing a implementation of the methodology, by `@failspy`: <https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb> ## GGUF Files I will upload some GGUF files here: <https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated-GGUF> ## Prompt Template This model uses the prompt format from **Vicuna** and supports **multi-turn** conversation. --- # Original model card: <p style="font-size:20px;" align="center"> ๐Ÿ  <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> ๐Ÿค— <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> โ€ข๐Ÿฑ <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> โ€ข ๐Ÿฆ <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> โ€ข ๐Ÿ“ƒ <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> โ€ข ๐Ÿ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> โ€ข ๐Ÿ“ƒ <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> ๐Ÿ‘‹ Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper. ## Model Details * **Model name**: WizardLM-2 7B * **Developed by**: WizardLM@Microsoft AI * **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * **Parameters**: 7B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage โ—<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.