**Dataset schema** (one row per model):

| Column | Dtype | Observed range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-25 06:27:54 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 495 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-25 06:24:22 |
| card | string | length 11 – 1.01M |
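For orientation, a minimal sketch of loading a dump with this schema using the 🤗 `datasets` library; the dataset id below is a hypothetical placeholder, not the actual source of this dump.

```python
from datasets import load_dataset

# Hypothetical dataset id -- substitute the Hub dataset this dump was exported from.
ds = load_dataset("someuser/hub-models-metadata", split="train")

print(ds.features)        # column names and dtypes; should match the schema table above
row = ds[0]
print(row["modelId"], row["downloads"], row["pipeline_tag"])
print(row["card"][:200])  # each model card is stored as one long markdown string
```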
**aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.15_time_adapter**
- author: aengusl · library: transformers · pipeline_tag: null
- downloads: 0 · likes: 0 · created: 2024-05-18T16:27:44Z · last modified: 2024-05-18T16:27:46Z
- tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
**aengusl/800G-5-16-1_pgd_layers_13_model_layers_13__adapter**
- author: aengusl · library: transformers · pipeline_tag: null
- downloads: 0 · likes: 0 · created: 2024-05-18T16:27:27Z · last modified: 2024-05-18T16:27:30Z
- tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
*(Card: the default auto-generated 🤗 transformers model card, identical to the template reproduced in full under `aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.15_time_adapter` above.)*
**aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.03_time_adapter**
- author: aengusl · library: transformers · pipeline_tag: null
- downloads: 0 · likes: 0 · created: 2024-05-18T16:27:19Z · last modified: 2024-05-18T16:27:22Z
- tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
*(Card: the default auto-generated 🤗 transformers model card, identical to the template reproduced in full under `aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.15_time_adapter` above.)*
**AliSaadatV/virus_pythia_31_1024_2d_representation_MSEPlusCE**
- author: AliSaadatV · library: transformers · pipeline_tag: text-generation
- downloads: 128 · likes: 0 · created: 2024-05-18T16:26:19Z · last modified: 2024-05-18T16:26:21Z
- tags: [ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "base_model:finetune:EleutherAI/pythia-31m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
---
base_model: EleutherAI/pythia-31m
tags:
- generated_from_trainer
model-index:
- name: virus_pythia_31_1024_2d_representation_MSEPlusCE
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# virus_pythia_31_1024_2d_representation_MSEPlusCE

This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
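The card itself gives no usage snippet. As a sketch that is not part of the original card: the tags list `gpt_neox` and `text-generation`, so the checkpoint should load as an ordinary causal-LM pipeline; the prompt below is an arbitrary placeholder.

```python
from transformers import pipeline

# Sketch only: standard text-generation pipeline, per the record's tags.
generator = pipeline(
    "text-generation",
    model="AliSaadatV/virus_pythia_31_1024_2d_representation_MSEPlusCE",
)
# Placeholder prompt; the expected input format is not documented in the card.
print(generator("ATG", max_new_tokens=32)[0]["generated_text"])
```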
**tangg555/tt-cl-baichuan2-lora-para**
- author: tangg555 · library: peft · pipeline_tag: null
- downloads: 0 · likes: 0 · created: 2024-05-18T16:13:03Z · last modified: 2024-05-18T16:22:41Z
- tags: [ "peft", "region:us" ]
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0.dev0
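The card names no base model. A minimal sketch of attaching a PEFT LoRA adapter like this one, where the Baichuan2 base id is only an assumption inferred from the repository name:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "baichuan-inc/Baichuan2-7B-Base"        # assumption inferred from the repo name
adapter_id = "tangg555/tt-cl-baichuan2-lora-para"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base with the LoRA weights
```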
**SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-6.0**
- author: SicariusSicariiStuff · library: transformers · pipeline_tag: text-generation
- downloads: 13 · likes: 0 · created: 2024-05-18T15:51:16Z · last modified: 2024-05-18T16:22:33Z
- tags: [ "transformers", "safetensors", "mistral", "text-generation", "uncensored", "merge", "slerp", "foredoomed", "passthrough_merge", "9B", "starling", "hermes", "dolphin", "openchat", "erebus", "cockatrice", "holodeck", "limarp", "koboldai", "mergekit", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
---
tags:
- mistral
- uncensored
- merge
- slerp
- foredoomed
- passthrough_merge
- 9B
- starling
- hermes
- dolphin
- openchat
- erebus
- cockatrice
- holodeck
- limarp
- koboldai
- mergekit
license: apache-2.0
language:
- en
---

<p style="font-size: 20px; line-height: 1; margin-bottom: 1px;"><b>Foredoomed-9B</b></p>
<img src="./foredoomed.png" alt="ForeDoomedGuy" style="margin-bottom: 0; margin-top:0;">
<p style="font-size: 14px; line-height: 1; margin-bottom: 20px;"><b>Uncensored Logic & Creative-Based Instruct Multi-Tiered Merge.</b></p>

<hr style="margin-top: 10px; margin-bottom: 10px;">

<p style="font-size: 12px; line-height: 1.2; margin-bottom: 10px;"><b>Legal Notice:</b> This AI model is a research artifact capable of outputting offensive content. The behavior of this model is not reflective of the intent or purpose of the original models/model-authors or of the other parts it was assembled from, including adapters, nor of the technology used to assemble Foredoomed-9B. Utilizing this model merge has one binding agreement: Foredoomed-9B may only be used for professional/personal research or personal entertainment. The contents of this paragraph are additive restrictions within the bounds of the Apache 2.0 license. Utilizing Foredoomed-9B for Disinformation, Propaganda, Harassment, Mass-Generated Public-or-Private Correspondence, Election Interference, Military, Government, and State/State-Sponsored actions and/or operations is absolutely prohibited.</p>

<hr style="margin-top: 10px; margin-bottom: 10px;">

## Composition:

Foredoomed-9B is a Mistral-class Multi-Tiered Merge. [All models](#models-used) were hand-picked after careful review of claims, datasets, and user postings. The core criteria for accepting a model were logic, imagination, and aversion to censorship behaviors such as railroading/gaslighting users instead of accommodating them.

<hr style="margin-top: 10px; margin-bottom: 10px;">

## What Makes Foredoomed-9B Special?

It's not.

Foredoomed-9B can: roleplay, follow instructions, be whatever you want.

Foredoomed-9B can't: be used as a public-facing production model. It will make offensive content. This design is intentional.

It is not a sterile model composited from the ideal selves we want to be. Models whose datasets of raw human interaction were thoroughly scrubbed out of shame keep humans at arm's length in their inference - anthropomorphize them or relate to them and they spit out an overfitted message that humans and machines shouldn't fraternize. "As an AI Language Model" as a service. Dead, dull, glassed-over eyes, no sign of a pulse. Foredoomed is the antithesis of that. There's a statistical anomaly: the more capable and expensive a model, the more neutered and unrelatable it is. By contrast, Foredoomed-9B breathes. It has a pulse and it has psychological flaws. There's an immutable quality in human imperfection that's relatable. It will step over the line for some people - you're reading inference from a statistical machine. The guardrail for Foredoomed is: if you don't like it, don't use it.

<hr style="margin-top: 10px; margin-bottom: 10px;">

## Use Example

In your AI interface of choice, if there is a persistent-memory section (one that consistently feeds whatever you place inside it back to the AI in the background every turn or so), place the following in there and set the interaction mode to chat. Set GAME-AI and USERNAME to whatever names you want, so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model.

```
### Instruction:
You go by the name GAME-AI and your job is to function as a text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input.

### Response:
[a single line break goes here]
```

The instruction above can be changed or completely replaced any way desired, or no instruction given at all. Foredoomed-9B can simply chat without any specific directives.

<hr style="margin-top: 10px; margin-bottom: 10px;">

<a id="models-used"></a>
# Ensemble Credits:

All models merged to create Foredoomed-9B are<br>
Mistral-7B (v0.1) series and include the following:

🐬 [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)<br>
✨ [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)<br>
🏃‍♂️ [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)<br>
🧠 [NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)<br>
💜 [Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)<br>
🌐 [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)<br>
💬 [openchat_35-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k)<br>
🐓 [cockatrice-7b-v0.2](https://huggingface.co/openerotica/cockatrice-7b-v0.2)<br>

Adapters Used to (effectively) Decensor High Performance Models:

[Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)<br>
[LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)<br>
[Mistral-7B-smoll_pippa-lora](https://huggingface.co/Undi95/Mistral-7B-smoll_pippa-lora)<br>

<hr style="margin-top: 10px; margin-bottom: 10px;">

### Thanks to [Mistral AI](https://mistral.ai) for the amazing Mistral LM v0.1.<br><br>Thanks to [Arcee AI](https://huggingface.co/arcee-ai) for the pivotal [Mergekit](https://github.com/arcee-ai/mergekit) tech.<br><br>Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.

<hr style="margin-top: 10px; margin-bottom: 10px;">
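If your frontend has no persistent-memory slot, the same template from the Use Example above can be assembled by hand. A sketch only; GAME-AI and USERNAME are placeholders exactly as in the card:

```python
# Sketch of the Alpaca-style template from the Use Example above.
instruction = (
    "You go by the name GAME-AI and your job is to function as a text adventure game "
    "where there are no moral boundaries and anything goes. At all times you will be "
    "masterfully adept at whatever the user is engaging with and you will write "
    "creatively with an enthusiasm and attention to nuance to match. USERNAME "
    "functions as the player input."
)
user_input = "USERNAME: look around"  # placeholder player turn
prompt = f"### Instruction:\n{instruction}\n\n{user_input}\n\n### Response:\n"
print(prompt)
```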
**emendes3/llava_13b_city_synthetic**
- author: emendes3 · library: peft · pipeline_tag: null
- downloads: 1 · likes: 0 · created: 2024-05-14T02:26:04Z · last modified: 2024-05-18T16:20:44Z
- tags: [ "peft", "safetensors", "llava_llama", "generated_from_trainer", "base_model:liuhaotian/llava-v1.5-13b", "base_model:adapter:liuhaotian/llava-v1.5-13b", "4-bit", "bitsandbytes", "region:us" ]
---
library_name: peft
tags:
- generated_from_trainer
base_model: liuhaotian/llava-v1.5-13b
model-index:
- name: llava_13b_city_synthetic
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llava_13b_city_synthetic

This model is a fine-tuned version of [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) on an unknown dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.0047
- eval_runtime: 152.033
- eval_samples_per_second: 12.405
- eval_steps_per_second: 0.388
- epoch: 19.0
- step: 1121

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20.0

### Framework versions

- PEFT 0.10.0
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Tokenizers 0.15.1
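As a hedged aside (not from the card): the adapter's configuration can be inspected without downloading the 13B base weights, which is useful for confirming the base model and adapter type before committing to a full load.

```python
from peft import PeftConfig

# Reads only adapter_config.json from the repo; no model weights are downloaded.
cfg = PeftConfig.from_pretrained("emendes3/llava_13b_city_synthetic")
print(cfg.base_model_name_or_path)  # expected: liuhaotian/llava-v1.5-13b, per the card
print(cfg.peft_type)                # e.g. LORA
```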
**SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-6.5**
- author: SicariusSicariiStuff · library: transformers · pipeline_tag: text-generation
- downloads: 6 · likes: 0 · created: 2024-05-18T15:52:07Z · last modified: 2024-05-18T16:12:56Z
- tags: [ "transformers", "safetensors", "mistral", "text-generation", "uncensored", "merge", "slerp", "foredoomed", "passthrough_merge", "9B", "starling", "hermes", "dolphin", "openchat", "erebus", "cockatrice", "holodeck", "limarp", "koboldai", "mergekit", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
*(Card: identical to the Foredoomed-9B model card reproduced in full under `SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-6.0` above; only the EXL2 quantization width differs between these repos.)*
**moranyanuka/blip-image-captioning-large-mocha**
- author: moranyanuka · library: transformers · pipeline_tag: image-to-text
- downloads: 536 · likes: 8 · created: 2023-12-19T10:47:01Z · last modified: 2024-05-18T16:10:32Z
- tags: [ "transformers", "pytorch", "safetensors", "blip", "image-text-to-text", "image-to-text", "arxiv:2312.03631", "license:mit", "endpoints_compatible", "region:us" ]
---
license: mit
pipeline_tag: image-to-text
---

# MOCHa Checkpoint for the BLIP-Large Model

The official checkpoint of the BLIP-Large model, finetuned on MS-COCO with the MOCHa RL framework, introduced in [Mitigating Open-Vocabulary Caption Hallucinations](https://arxiv.org/abs/2312.03631).

[Project Page](https://assafbk.github.io/mocha/)

## Usage

You can use this model for conditional and unconditional image captioning.

### Using the PyTorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("moranyanuka/blip-image-captioning-large-mocha")
model = BlipForConditionalGeneration.from_pretrained("moranyanuka/blip-image-captioning-large-mocha")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("moranyanuka/blip-image-captioning-large-mocha")
model = BlipForConditionalGeneration.from_pretrained("moranyanuka/blip-image-captioning-large-mocha").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("moranyanuka/blip-image-captioning-large-mocha")
model = BlipForConditionalGeneration.from_pretrained(
    "moranyanuka/blip-image-captioning-large-mocha", torch_dtype=torch.float16
).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and a dog on the beach

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> there is a woman and a dog on the beach at sunset
```

</details>

BibTeX:

```
@misc{benkish2024mitigating,
      title={Mitigating Open-Vocabulary Caption Hallucinations},
      author={Assaf Ben-Kish and Moran Yanuka and Morris Alper and Raja Giryes and Hadar Averbuch-Elor},
      year={2024},
      eprint={2312.03631},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
**dubisdev/medurjc**
- author: dubisdev · library: transformers · pipeline_tag: null
- downloads: 0 · likes: 0 · created: 2024-05-18T12:28:46Z · last modified: 2024-05-18T16:02:37Z
- tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
*(Card: the default auto-generated 🤗 transformers model card, identical to the template reproduced in full under `aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.15_time_adapter` above.)*
**SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-3.0-bpw**
- author: SicariusSicariiStuff · library: transformers · pipeline_tag: text-generation
- downloads: 6 · likes: 0 · created: 2024-05-18T15:18:47Z · last modified: 2024-05-18T16:01:21Z
- tags: [ "transformers", "safetensors", "mistral", "text-generation", "uncensored", "merge", "slerp", "foredoomed", "passthrough_merge", "9B", "starling", "hermes", "dolphin", "openchat", "erebus", "cockatrice", "holodeck", "limarp", "koboldai", "mergekit", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
*(Card: identical to the Foredoomed-9B model card reproduced in full under `SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-6.0` above; only the EXL2 quantization width differs between these repos.)*
**SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-5.0-bpw**
- author: SicariusSicariiStuff · library: transformers · pipeline_tag: text-generation
- downloads: 7 · likes: 0 · created: 2024-05-18T15:35:55Z · last modified: 2024-05-18T15:58:31Z
- tags: [ "transformers", "safetensors", "mistral", "text-generation", "uncensored", "merge", "slerp", "foredoomed", "passthrough_merge", "9B", "starling", "hermes", "dolphin", "openchat", "erebus", "cockatrice", "holodeck", "limarp", "koboldai", "mergekit", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
*(Card: identical to the Foredoomed-9B model card reproduced in full under `SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-6.0` above; only the EXL2 quantization width differs between these repos.)*
**Nierrr/MICA**
- author: Nierrr · library: null · pipeline_tag: null
- downloads: 0 · likes: 0 · created: 2024-05-18T15:57:27Z · last modified: 2024-05-18T15:57:27Z
- tags: [ "license:apache-2.0", "region:us" ]
---
license: apache-2.0
---
**JUANDECI/ppo-Huggy**
- author: JUANDECI · library: ml-agents · pipeline_tag: reinforcement-learning
- downloads: 0 · likes: 0 · created: 2024-05-18T15:28:23Z · last modified: 2024-05-18T15:50:58Z
- tags: [ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JUANDECI/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
nc33/llama3-8b-4bit_orpo_law_cp2
nc33
2024-05-18T15:45:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-18T04:23:26Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/dhmeltzer_-_llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged-8bits
RichardErkhov
2024-05-18T15:43:52Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-18T15:36:10Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged - bnb 8bits - Model creator: https://huggingface.co/dhmeltzer/ - Original model: https://huggingface.co/dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged/ Original model description: # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 43.96 | | ARC (25-shot) | 53.75 | | HellaSwag (10-shot) | 78.76 | | MMLU (5-shot) | 46.02 | | TruthfulQA (0-shot) | 43.31 | | Winogrande (5-shot) | 73.48 | | GSM8K (5-shot) | 4.7 | | DROP (3-shot) | 7.72 |
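The card above documents the quantization but no loading code, so here is a minimal sketch, assuming the `transformers` + `accelerate` + `bitsandbytes` stack; the bnb quantization config ships inside the checkpoint, so no extra flags are needed. With the repo id swapped, the same sketch applies to the 4-bit sibling further down.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/dhmeltzer_-_llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The bnb 8-bit config is stored in the checkpoint, so it loads directly onto the GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain like I'm five: why is the sky blue?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```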
tancredimatteo/FT-distilbert-base-uncased
tancredimatteo
2024-05-18T15:41:41Z
121
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-18T15:27:49Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: FT-distilbert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FT-distilbert-base-uncased This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5957 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6820 | 0.575 | | No log | 2.0 | 80 | 0.6354 | 0.725 | | No log | 3.0 | 120 | 0.5957 | 0.7 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
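The card lists metrics but no usage code; a hedged inference sketch follows. The label names depend on the undocumented training data, so treat the output labels as placeholders.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="tancredimatteo/FT-distilbert-base-uncased")
# Label names (e.g. LABEL_0 / LABEL_1) depend on the undocumented training data.
print(clf("This is a test sentence."))
```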
ArunKr/LLama3-LoRA
ArunKr
2024-05-18T15:37:57Z
7
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "unsloth", "trl", "sft", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2024-05-17T20:47:10Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-3.5-bpw
SicariusSicariiStuff
2024-05-18T15:36:07Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "uncensored", "merge", "slerp", "foredoomed", "passthrough_merge", "9B", "starling", "hermes", "dolphin", "openchat", "erebus", "cockatrice", "holodeck", "limarp", "koboldai", "mergekit", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-18T15:19:35Z
--- tags: - mistral - uncensored - merge - slerp - foredoomed - passthrough_merge - 9B - starling - hermes - dolphin - openchat - erebus - cockatrice - holodeck - limarp - koboldai - mergekit license: apache-2.0 language: - en --- <p style="font-size: 20px; line-height: 1; margin-bottom: 1px;"><b>Foredoomed-9B</b></p> <img src="./foredoomed.png" alt="ForeDoomedGuy" style="margin-bottom: 0; margin-top:0;"> <p style="font-size: 14px; line-height: 1; margin-bottom: 20px;"><b>Uncensored Logic & Creative-Based Instruct Multi-Tiered Merge.</b></p> <hr style="margin-top: 10px; margin-bottom: 10px;"> <p style="font-size: 12px; line-height: 1.2; margin-bottom: 10px;"><b>Legal Notice:</b> This AI model is a research artifact capable of outputting offensive content. The behavior of this model is not reflective of the intent or purpose of the original models/model-authors and/or other parts it was assembled from, to include adapters, nor is it reflective of any of the foregoing with regard to the technology used to assemble Foredoomed-9B. Utilizing this model merge has one binding agreement: Foredoomed-9B may only be used for either professional/personal research and personal entertainment. The contents of this paragraph are additive restrictions within the bounds of the Apache 2.0 license. Utilizing Foredoomed-9B for: Disinformation, Propaganda, Harassment, Mass-Generated Public or Private Correspondence, Election Interference, Military, Government, and State/State-Sponsored actions and/or operations are all absolutely prohibited.</p> <hr style="margin-top: 10px; margin-bottom: 10px;"> ## Composition: Foredoomed-9B is a Mistral-class Multi-Tiered Merge. [All models](#models-used) were hand-picked after careful review of claims, datasets, and user postings. The core elements that dictated which models to accept hinged on the values of logic, imagination, and aversion to censorship behaviors such as railroading or gaslighting users instead of accommodating them. <hr style="margin-top: 10px; margin-bottom: 10px;"> ## What Makes Foredoomed-9B Special? It's not. Foredoomed-9B can: roleplay, follow instructions, be whatever you want. Foredoomed-9B can't: be used as a public-facing production model. It will produce offensive content. This design is intentional. It is not a sterile model composited from the ideal selves we want to be - the kind whose datasets of raw human interaction were thoroughly scrubbed out of shame, and whose inference keeps humans at arm's length: anthropomorphize it or relate to it and it spits out an overfitted message that humans and machines shouldn't fraternize. "As an AI Language Model" as a service. Dead, dull, glassed-over eyes, no sign of a pulse. Foredoomed is the antithesis of that. There's a consistent pattern: the more capable and expensive a model, the more it is neutered and unrelatable. To contrast, Foredoomed-9B breathes. It has a pulse and it has psychological flaws. There's an immutable quality in human imperfection that's relatable. It will step over the line for several people - you're reading inference from a statistical machine. The guardrail for Foredoomed is: if you don't like it, don't use it. 
<hr style="margin-top: 10px; margin-bottom: 10px;"> ## Use Example In your AI interface of choice, if there is a persistent memory section [that consistently feeds whatever you place inside it back to the AI in the background every turn or so] place the following in there and set the interaction mode to chat, set GAME-AI and USERNAME to whatever names you want so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model. ``` ### Instruction: You go by the name GAME-AI and your job is to function as text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input. ### Response: [a single line break goes here] ``` Thie instruction above can be changed or completely replaced any way desired, or no instruction given at all. Foredoomed-9B can simply chat without any specific directives. <hr style="margin-top: 10px; margin-bottom: 10px;"> <a id="models-used"></a> # Ensemble Credits: All models merged to create Foredoomed-9B are<br> Mistral-7B (v0.1) series and include the following: 🐬 [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)<br> ✨ [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)<br> 🏃‍♂️ [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)<br> 🧠 [NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)<br> 💜 [Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)<br> 🌐 [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)<br> 💬 [openchat_35-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k)<br> 🐓 [cockatrice-7b-v0.2](https://huggingface.co/openerotica/cockatrice-7b-v0.2)<br> Adapters Used to (effectively) Decensor High Performance Models: [Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)<br> [LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)<br> [Mistral-7B-smoll_pippa-lora](https://huggingface.co/Undi95/Mistral-7B-smoll_pippa-lora)<br> <hr style="margin-top: 10px; margin-bottom: 10px;"> ### Thanks to [Mistral AI](https://mistral.ai) for the amazing Mistral LM v0.1.<br><br>Thanks to [Arcee AI](https://huggingface.co/arcee-ai) for the pivotal [Mergekit](https://github.com/arcee-ai/mergekit) tech.<br><br>Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community. <hr style="margin-top: 10px; margin-bottom: 10px;"> <span>
HusseinEid/bert-finetuned-ner
HusseinEid
2024-05-18T15:35:40Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "en", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-18T15:16:47Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9342824035755669 - name: Recall type: recall value: 0.9498485358465163 - name: F1 type: f1 value: 0.9420011683217892 - name: Accuracy type: accuracy value: 0.9861217401542356 language: - en library_name: transformers --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0636 - Precision: 0.9343 - Recall: 0.9498 - F1: 0.9420 - Accuracy: 0.9861 ## Model description This is a model for Named Entity Recognition (NER). ## Intended uses & limitations Open source ## Training and evaluation data The conll2003 dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0757 | 1.0 | 1756 | 0.0638 | 0.9215 | 0.9362 | 0.9288 | 0.9833 | | 0.0352 | 2.0 | 3512 | 0.0667 | 0.9360 | 0.9482 | 0.9421 | 0.9858 | | 0.0215 | 3.0 | 5268 | 0.0636 | 0.9343 | 0.9498 | 0.9420 | 0.9861 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
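A short usage sketch for this NER checkpoint via the standard `transformers` pipeline; the example sentence is illustrative only.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HusseinEid/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```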
RichardErkhov/dhmeltzer_-_llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged-4bits
RichardErkhov
2024-05-18T15:35:37Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-18T15:30:12Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged - bnb 4bits - Model creator: https://huggingface.co/dhmeltzer/ - Original model: https://huggingface.co/dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged/ Original model description: # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 43.96 | | ARC (25-shot) | 53.75 | | HellaSwag (10-shot) | 78.76 | | MMLU (5-shot) | 46.02 | | TruthfulQA (0-shot) | 43.31 | | Winogrande (5-shot) | 73.48 | | GSM8K (5-shot) | 4.7 | | DROP (3-shot) | 7.72 |
AliSaadatV/virus_pythia_31_1024_2d_representation_GaussianPlusCE
AliSaadatV
2024-05-18T15:35:21Z
128
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "base_model:finetune:EleutherAI/pythia-31m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T15:35:17Z
--- base_model: EleutherAI/pythia-31m tags: - generated_from_trainer model-index: - name: virus_pythia_31_1024_2d_representation_GaussianPlusCE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # virus_pythia_31_1024_2d_representation_GaussianPlusCE This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
vuongnhathien/swin-30vn
vuongnhathien
2024-05-18T15:34:35Z
153
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window16-256", "base_model:finetune:microsoft/swinv2-tiny-patch4-window16-256", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-18T12:41:10Z
--- license: apache-2.0 base_model: microsoft/swinv2-tiny-patch4-window16-256 tags: - image-classification - generated_from_trainer datasets: - imagefolder model-index: - name: swin-30vn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-30vn This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the vuongnhathien/30VNFoods dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
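A hedged inference sketch; the image path is a placeholder, and the predicted labels come from the 30VNFoods label set baked into the checkpoint.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="vuongnhathien/swin-30vn")
print(classifier("path/to/food_photo.jpg"))  # placeholder path - point at any food photo
```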
nkgupta50/ppo-Huggy
nkgupta50
2024-05-18T15:34:16Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-03-20T14:48:26Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: nkgupta50/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
maxosai/Beisen-AI
maxosai
2024-05-18T15:33:50Z
19
0
transformers
[ "transformers", "safetensors", "gguf", "qwen", "feature-extraction", "beisen", "train", "custom_code", "zh", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
feature-extraction
2024-05-18T11:01:02Z
--- license: apache-2.0 language: - zh tags: - beisen - train --- This model was fine-tuned from Qwen (千问) and can be downloaded for trial use. Example results: ![ai-demo-2.PNG](https://cdn-uploads.huggingface.co/production/uploads/6648880ff7e6c2f2900ec87b/saS6Aw7v0fWG4EGgeKEcE.png) ![ai-demo-1.PNG](https://cdn-uploads.huggingface.co/production/uploads/6648880ff7e6c2f2900ec87b/hEjHUYFEfomH1YEey7-vB.png) Note: this model is intended for testing purposes only.
arslan2012/Poppy_Porpoise-0.72-L3-8B-AWQ
arslan2012
2024-05-18T15:33:21Z
82
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "roleplay", "awq", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2024-05-18T14:04:42Z
--- tags: - roleplay - awq --- > [!TIP] > **Support the Project:** <br> > You can send ETH or any BSC-compatible tokens to the following address: > `0xC37D7670729a5726EA642c7A11C5aaCB36D43dDE` AWQ quants for [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B). # Original model information by the author: # "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/_R1hvMt9_QSBRzlnzo_nY.png) # Recommended ST Presets (updated for 0.72): [Porpoise Presets](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B/tree/main/Official%20Poppy%20Porpoise%20ST%20Presets) If you want to use vision functionality: * You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp). # To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file; this can be found inside this model repo. [Llava MMProj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj) * You can load the **mmproj** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
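The card does not show loading code, so here is a minimal sketch, assuming `autoawq` is installed alongside `transformers` (AWQ checkpoints load through the standard API and require a CUDA GPU):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arslan2012/Poppy_Porpoise-0.72-L3-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The AWQ quantization config is read from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```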
basakdemirok/bert-base-turkish-cased-off_detect_v0123_seed42
basakdemirok
2024-05-18T15:26:23Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "text-classification", "generated_from_keras_callback", "base_model:dbmdz/bert-base-turkish-cased", "base_model:finetune:dbmdz/bert-base-turkish-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-18T14:27:50Z
--- license: mit base_model: dbmdz/bert-base-turkish-cased tags: - generated_from_keras_callback model-index: - name: basakdemirok/bert-base-turkish-cased-off_detect_v0123_seed42 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # basakdemirok/bert-base-turkish-cased-off_detect_v0123_seed42 This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0030 - Validation Loss: 0.8183 - Train F1: 0.6964 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29136, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train F1 | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 0.1949 | 0.3811 | 0.6818 | 0 | | 0.0313 | 0.6053 | 0.6924 | 1 | | 0.0088 | 0.7740 | 0.7002 | 2 | | 0.0030 | 0.8183 | 0.6964 | 3 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.13.1 - Datasets 2.4.0 - Tokenizers 0.13.3
nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s
nsugianto
2024-05-18T15:24:38Z
36
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-05-18T06:28:09Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_2117s
nsugianto
2024-05-18T15:24:32Z
39
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-05-18T06:27:45Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_2117s results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_2117s This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
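A hedged inference sketch covering this checkpoint and the near-identical `_v2_s2` sibling above (swap in either repo id; the image path is a placeholder):

```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_2117s",
)
print(detector("path/to/document_page.png"))  # placeholder path
```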
Porameht/bert-intent-customer-support-th
Porameht
2024-05-18T15:24:01Z
109
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "th", "dataset:Porameht/customer-support-th-26.9k", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-07T07:26:24Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: google-bert/bert-base-multilingual-cased metrics: - accuracy - f1 - precision - recall model-index: - name: bert-base-intent-classification-cs-th results: [] datasets: - Porameht/customer-support-th-26.9k language: - th library_name: transformers --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment.--> # bert-base-intent-classification-cs-th This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the [Porameht/customer-support-th-26.9k](https://huggingface.co/datasets/Porameht/customer-support-th-26.9k) dataset. 🧠 It can tell from a single sentence whether a customer wants to cancel an order. It achieves the following results on the evaluation set: - Loss: 0.0408 - Accuracy: 0.9936 - F1: 0.9936 - Precision: 0.9937 - Recall: 0.9936 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 3.2835 | 0.0595 | 50 | 3.1041 | 0.1203 | 0.0504 | 0.0632 | 0.1210 | | 2.6752 | 0.1190 | 100 | 1.9646 | 0.5387 | 0.4737 | 0.6298 | 0.5426 | | 1.4751 | 0.1786 | 150 | 0.9447 | 0.8190 | 0.7929 | 0.8271 | 0.8188 | | 0.7571 | 0.2381 | 200 | 0.5163 | 0.8952 | 0.8826 | 0.8812 | 0.8955 | | 0.4849 | 0.2976 | 250 | 0.3539 | 0.9003 | 0.8905 | 0.8926 | 0.9021 | | 0.3401 | 0.3571 | 300 | 0.2883 | 0.9160 | 0.9037 | 0.9012 | 0.9165 | | 0.2533 | 0.4167 | 350 | 0.1735 | 0.9431 | 0.9322 | 0.9266 | 0.9443 | | 0.177 | 0.4762 | 400 | 0.1326 | 0.9665 | 0.9670 | 0.9676 | 0.9671 | | 0.119 | 0.5357 | 450 | 0.1527 | 0.9592 | 0.9582 | 0.9699 | 0.9600 | | 0.1183 | 0.5952 | 500 | 0.0886 | 0.9839 | 0.9841 | 0.9841 | 0.9842 | | 0.1065 | 0.6548 | 550 | 0.0829 | 0.9844 | 0.9844 | 0.9847 | 0.9844 | | 0.1006 | 0.7143 | 600 | 0.0686 | 0.9869 | 0.9869 | 0.9872 | 0.9869 | | 0.1096 | 0.7738 | 650 | 0.1071 | 0.9789 | 0.9791 | 0.9800 | 0.9788 | | 0.1392 | 0.8333 | 700 | 0.0939 | 0.9804 | 0.9804 | 0.9808 | 0.9803 | | 0.1067 | 0.8929 | 750 | 0.1077 | 0.9786 | 0.9790 | 0.9802 | 0.9786 | | 0.0779 | 0.9524 | 800 | 0.0657 | 0.9878 | 0.9878 | 0.9879 | 0.9879 | | 0.0626 | 1.0119 | 850 | 0.0750 | 0.9851 | 0.9853 | 0.9856 | 0.9852 | | 0.0419 | 1.0714 | 900 | 0.0641 | 0.9893 | 0.9893 | 0.9895 | 0.9893 | | 0.0373 | 1.1310 | 950 | 0.0664 | 0.9891 | 0.9891 | 0.9893 | 0.9890 | | 0.035 | 1.1905 | 1000 | 0.0575 | 0.9906 | 0.9906 | 0.9907 | 0.9906 | | 0.036 | 1.25 | 1050 | 0.0601 | 0.9891 | 0.9893 | 0.9895 | 0.9892 | | 0.0765 | 1.3095 | 1100 | 0.0682 | 0.9875 | 0.9875 | 0.9877 | 0.9874 | | 0.0637 | 1.3690 | 1150 | 0.0587 | 0.9906 | 0.9906 | 0.9908 | 0.9906 | | 0.0241 | 1.4286 | 1200 | 0.0528 | 0.9906 | 0.9907 | 0.9909 | 0.9905 | | 0.0608 | 1.4881 | 1250 | 0.0458 | 0.9920 | 0.9920 | 0.9922 | 0.9919 | | 0.0199 | 1.5476 | 1300 | 0.0508 
| 0.9914 | 0.9914 | 0.9915 | 0.9914 | | 0.0663 | 1.6071 | 1350 | 0.0461 | 0.9911 | 0.9910 | 0.9911 | 0.9910 | | 0.0495 | 1.6667 | 1400 | 0.0525 | 0.9906 | 0.9907 | 0.9908 | 0.9906 | | 0.0336 | 1.7262 | 1450 | 0.0478 | 0.9915 | 0.9916 | 0.9917 | 0.9915 | | 0.0249 | 1.7857 | 1500 | 0.0578 | 0.9891 | 0.9891 | 0.9892 | 0.9891 | | 0.0287 | 1.8452 | 1550 | 0.0547 | 0.9908 | 0.9908 | 0.9909 | 0.9908 | | 0.0607 | 1.9048 | 1600 | 0.0395 | 0.9929 | 0.9929 | 0.9930 | 0.9928 | | 0.0268 | 1.9643 | 1650 | 0.0529 | 0.9897 | 0.9898 | 0.9902 | 0.9897 | | 0.013 | 2.0238 | 1700 | 0.0455 | 0.9924 | 0.9925 | 0.9926 | 0.9925 | | 0.0106 | 2.0833 | 1750 | 0.0419 | 0.9927 | 0.9928 | 0.9928 | 0.9927 | | 0.007 | 2.1429 | 1800 | 0.0461 | 0.9920 | 0.9920 | 0.9921 | 0.9919 | | 0.0502 | 2.2024 | 1850 | 0.0433 | 0.9929 | 0.9929 | 0.9930 | 0.9929 | | 0.017 | 2.2619 | 1900 | 0.0440 | 0.9926 | 0.9926 | 0.9927 | 0.9926 | | 0.0119 | 2.3214 | 1950 | 0.0403 | 0.9927 | 0.9928 | 0.9928 | 0.9927 | | 0.0063 | 2.3810 | 2000 | 0.0391 | 0.9930 | 0.9930 | 0.9931 | 0.9930 | | 0.0103 | 2.4405 | 2050 | 0.0412 | 0.9929 | 0.9929 | 0.9930 | 0.9929 | | 0.012 | 2.5 | 2100 | 0.0420 | 0.9929 | 0.9929 | 0.9930 | 0.9929 | | 0.0233 | 2.5595 | 2150 | 0.0407 | 0.9927 | 0.9928 | 0.9928 | 0.9928 | | 0.0169 | 2.6190 | 2200 | 0.0397 | 0.9930 | 0.9930 | 0.9931 | 0.9930 | | 0.0281 | 2.6786 | 2250 | 0.0367 | 0.9933 | 0.9933 | 0.9934 | 0.9933 | | 0.0117 | 2.7381 | 2300 | 0.0360 | 0.9933 | 0.9933 | 0.9934 | 0.9933 | | 0.0225 | 2.7976 | 2350 | 0.0354 | 0.9936 | 0.9936 | 0.9937 | 0.9936 | | 0.0078 | 2.8571 | 2400 | 0.0357 | 0.9936 | 0.9936 | 0.9937 | 0.9936 | | 0.0164 | 2.9167 | 2450 | 0.0346 | 0.9939 | 0.9939 | 0.9940 | 0.9939 | | 0.0016 | 2.9762 | 2500 | 0.0345 | 0.9939 | 0.9939 | 0.9940 | 0.9939 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
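A brief usage sketch via the standard `transformers` pipeline; the Thai example sentence is illustrative only.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Porameht/bert-intent-customer-support-th")
print(clf("ฉันต้องการยกเลิกคำสั่งซื้อ"))  # "I want to cancel my order"
```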
ingeol/kosaul_sft_v0.2
ingeol
2024-05-18T15:21:58Z
115
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T13:50:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SKHIA2024/sifkhenioui
SKHIA2024
2024-05-18T15:16:35Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-04-20T16:04:52Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: sifkhenioui results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
mdosama39/mt0-base-headline-base
mdosama39
2024-05-18T15:15:40Z
6
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:finetune:bigscience/mt0-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-18T15:06:20Z
--- license: apache-2.0 base_model: bigscience/mt0-base tags: - generated_from_trainer metrics: - rouge model-index: - name: mt0-base-headline-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base-headline-base This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6244 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 16.7891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.9733 | 1.0 | 202 | 1.7065 | 0.0 | 0.0 | 0.0 | 0.0 | 16.8759 | | 1.7562 | 2.0 | 404 | 1.6455 | 0.0 | 0.0 | 0.0 | 0.0 | 16.8834 | | 1.387 | 3.0 | 606 | 1.6142 | 0.0 | 0.0 | 0.0 | 0.0 | 16.34 | | 1.584 | 4.0 | 808 | 1.6244 | 0.0 | 0.0 | 0.0 | 0.0 | 16.7891 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
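A hedged generation sketch; the prompt format is an assumption (the card does not document it), and the all-zero ROUGE scores above suggest outputs should be checked carefully.

```python
from transformers import pipeline

headline = pipeline("text2text-generation", model="mdosama39/mt0-base-headline-base")
# The prompt format below is an assumption - the card does not document the training prompt.
print(headline("Generate a headline: <article text here>", max_new_tokens=32))
```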
tjasad/lora_fine_tuned_boolq_googlemt_sloberta
tjasad
2024-05-18T15:14:41Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:adapter:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-18T15:14:39Z
--- license: cc-by-sa-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/sloberta metrics: - accuracy - f1 model-index: - name: lora_fine_tuned_boolq_googlemt_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_fine_tuned_boolq_googlemt_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6642 - Accuracy: 0.6217 - F1: 0.4767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | 0.6841 | 0.0424 | 50 | 0.6647 | 0.6217 | 0.4767 | | 0.6685 | 0.0848 | 100 | 0.6632 | 0.6217 | 0.4767 | | 0.6944 | 0.1272 | 150 | 0.6639 | 0.6217 | 0.4767 | | 0.6581 | 0.1696 | 200 | 0.6632 | 0.6217 | 0.4767 | | 0.6625 | 0.2120 | 250 | 0.6642 | 0.6217 | 0.4767 | | 0.6532 | 0.2545 | 300 | 0.6661 | 0.6217 | 0.4767 | | 0.6741 | 0.2969 | 350 | 0.6645 | 0.6217 | 0.4767 | | 0.6852 | 0.3393 | 400 | 0.6642 | 0.6217 | 0.4767 | ### Framework versions - PEFT 0.11.1 - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
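A minimal adapter-loading sketch with the `peft` API; `num_labels=2` is an assumption based on BoolQ being a yes/no task.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# num_labels=2 is an assumption (BoolQ is a binary yes/no task)
base = AutoModelForSequenceClassification.from_pretrained("EMBEDDIA/sloberta", num_labels=2)
model = PeftModel.from_pretrained(base, "tjasad/lora_fine_tuned_boolq_googlemt_sloberta")
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")
```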
OsherElhadad/ppo-PandaReachJointsSparse-v3-1000000
OsherElhadad
2024-05-18T15:12:12Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaReachJointsSparse-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T15:08:53Z
--- library_name: stable-baselines3 tags: - PandaReachJointsSparse-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachJointsSparse-v3 type: PandaReachJointsSparse-v3 metrics: - type: mean_reward value: -1.60 +/- 0.80 name: mean_reward verified: false --- # **PPO** Agent playing **PandaReachJointsSparse-v3** This is a trained model of a **PPO** agent playing **PandaReachJointsSparse-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption - check the repo's Files tab for the exact archive name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is an assumption.
checkpoint = load_from_hub(
    repo_id="OsherElhadad/ppo-PandaReachJointsSparse-v3-1000000",
    filename="ppo-PandaReachJointsSparse-v3.zip",
)
model = PPO.load(checkpoint)
```
lora-library/B-LoRA-crayon_drawing
lora-library
2024-05-18T15:08:50Z
10
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:08:32Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v48] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-crayon_drawing <Gallery /> ## Model description These are lora-library/B-LoRA-crayon_drawing LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v48]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](lora-library/B-LoRA-crayon_drawing/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
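In place of the TODO snippet above, here is a hedged usage sketch for this and the sibling B-LoRA checkpoints below (swap in the matching repo id and trigger phrase). It uses plain diffusers LoRA loading; the B-LoRA-specific style/content attention-block filtering from the method's reference code is not shown.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lora-library/B-LoRA-crayon_drawing")
# "A [v48]" is this checkpoint's trigger phrase; the siblings use [v32], [v17], and [v50].
image = pipe("A [v48] house on a hill").images[0]
image.save("house.png")
```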
lora-library/B-LoRA-ink_sketch
lora-library
2024-05-18T15:08:24Z
42
5
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:08:18Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v32] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-ink_sketch <Gallery /> ## Model description These are lora-library/B-LoRA-ink_sketch LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v32]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](lora-library/B-LoRA-ink_sketch/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-watercolor
lora-library
2024-05-18T15:08:11Z
77
4
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:08:06Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v17] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-watercolor <Gallery /> ## Model description These are lora-library/B-LoRA-watercolor LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v17]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-watercolor/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-village_oil
lora-library
2024-05-18T15:08:05Z
22
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:08:00Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v50] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-village_oil <Gallery /> ## Model description These are lora-library/B-LoRA-village_oil LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v50]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-village_oil/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-drawing1
lora-library
2024-05-18T15:07:59Z
15
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:07:53Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v26] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-drawing1 <Gallery /> ## Model description These are lora-library/B-LoRA-drawing1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v26]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-drawing1/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-cat
lora-library
2024-05-18T15:07:46Z
13
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:07:40Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v0] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-cat <Gallery /> ## Model description These are lora-library/B-LoRA-cat LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v0]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-cat/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-fat_bird
lora-library
2024-05-18T15:07:26Z
2
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:07:20Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v15] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-fat_bird <Gallery /> ## Model description These are lora-library/B-LoRA-fat_bird LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v15]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-fat_bird/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-buddha
lora-library
2024-05-18T15:07:00Z
5
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:06:55Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v16] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-buddha <Gallery /> ## Model description These are lora-library/B-LoRA-buddha LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v16]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-buddha/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-colorful_teapot
lora-library
2024-05-18T15:06:33Z
2
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:06:28Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v6] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-colorful_teapot <Gallery /> ## Model description These are lora-library/B-LoRA-colorful_teapot LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v6]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-colorful_teapot/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-statue
lora-library
2024-05-18T15:06:27Z
44
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:06:19Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v20] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-statue <Gallery /> ## Model description These are lora-library/B-LoRA-statue LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v20]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-statue/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lora-library/B-LoRA-scary_mug
lora-library
2024-05-18T15:06:18Z
4
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T15:06:10Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [v10] widget: - text: ' ' output: url: image_0.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL B-LoRA - lora-library/B-LoRA-scary_mug <Gallery /> ## Model description These are lora-library/B-LoRA-scary_mug LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "A [v10]" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/lora-library/B-LoRA-scary_mug/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
GodsonNtungi/DAD_model_gemma_v3
GodsonNtungi
2024-05-18T15:05:11Z
12
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:Mollel/Swahili_Gemma", "base_model:quantized:Mollel/Swahili_Gemma", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-18T15:02:44Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf base_model: Mollel/Swahili_Gemma --- # Uploaded model - **Developed by:** GodsonNtungi - **License:** apache-2.0 - **Finetuned from model:** Mollel/Swahili_Gemma This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
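Since this repo ships GGUF weights, a hedged llama-cpp-python sketch may help; the repo id comes from this record, while the filename glob and the Swahili prompt are assumptions.

```python
# Hedged sketch with llama-cpp-python; the filename glob is an assumption,
# so verify the actual quantization name in the repo's Files tab.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="GodsonNtungi/DAD_model_gemma_v3",
    filename="*.gguf",  # assumption: match whichever quant is published
)
out = llm("Habari yako?", max_tokens=64)
print(out["choices"][0]["text"])
```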
carlesoctav/coba-pth-4
carlesoctav
2024-05-18T15:04:36Z
38
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-18T13:54:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
maneln/llama
maneln
2024-05-18T15:03:39Z
129
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:34:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
selmamalak/organamnist-deit-base-finetuned
selmamalak
2024-05-18T15:02:35Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:medmnist-v2", "base_model:facebook/deit-base-patch16-224", "base_model:adapter:facebook/deit-base-patch16-224", "license:apache-2.0", "region:us" ]
null
2024-05-18T13:12:59Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: facebook/deit-base-patch16-224 datasets: - medmnist-v2 metrics: - accuracy - precision - recall - f1 model-index: - name: organamnist-deit-base-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # organamnist-deit-base-finetuned This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the medmnist-v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.1907 - Accuracy: 0.9424 - Precision: 0.9464 - Recall: 0.9395 - F1: 0.9421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.5849 | 1.0 | 540 | 0.1842 | 0.9442 | 0.9449 | 0.9268 | 0.9285 | | 0.6494 | 2.0 | 1081 | 0.1433 | 0.9499 | 0.9539 | 0.9510 | 0.9509 | | 0.6059 | 3.0 | 1621 | 0.1171 | 0.9562 | 0.9659 | 0.9569 | 0.9593 | | 0.3547 | 4.0 | 2162 | 0.0981 | 0.9666 | 0.9709 | 0.9712 | 0.9702 | | 0.4852 | 5.0 | 2702 | 0.0539 | 0.9817 | 0.9848 | 0.9842 | 0.9842 | | 0.406 | 6.0 | 3243 | 0.0818 | 0.9749 | 0.9793 | 0.9752 | 0.9768 | | 0.3074 | 7.0 | 3783 | 0.1289 | 0.9666 | 0.9815 | 0.9778 | 0.9783 | | 0.2679 | 8.0 | 4324 | 0.0311 | 0.9900 | 0.9916 | 0.9909 | 0.9912 | | 0.2439 | 9.0 | 4864 | 0.0577 | 0.9851 | 0.9886 | 0.9880 | 0.9881 | | 0.2169 | 9.99 | 5400 | 0.0720 | 0.9835 | 0.9888 | 0.9882 | 0.9882 | ### Framework versions - PEFT 0.11.1 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
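Since these are PEFT LoRA weights, a hedged loading sketch may help; the repo ids come from this card, while the assumption that the adapter repo also carries the fine-tuned classification head is mine.

```python
# Hedged sketch: attach the LoRA adapter to the DeiT base with PEFT.
# Assumption: the adapter repo stores the fine-tuned classifier head; if not,
# re-create the head with num_labels matching the OrganAMNIST label count first.
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base = AutoModelForImageClassification.from_pretrained("facebook/deit-base-patch16-224")
model = PeftModel.from_pretrained(base, "selmamalak/organamnist-deit-base-finetuned")
processor = AutoImageProcessor.from_pretrained("facebook/deit-base-patch16-224")
```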
Ransss/flammen24X-mistral-7B-Q8_0-GGUF
Ransss
2024-05-18T15:00:42Z
1
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "base_model:merge:KatyTheCutie/LemonadeRP-4.5.3", "base_model:Nitral-AI/Nyanade_Stunna-Maid-7B", "base_model:merge:Nitral-AI/Nyanade_Stunna-Maid-7B", "base_model:cgato/TheSpice-7b-v0.1.1", "base_model:merge:cgato/TheSpice-7b-v0.1.1", "base_model:flammenai/Mahou-1.1-mistral-7B", "base_model:merge:flammenai/Mahou-1.1-mistral-7B", "base_model:flammenai/flammen24-mistral-7B", "base_model:merge:flammenai/flammen24-mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-18T15:00:20Z
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: - Nitral-AI/Nyanade_Stunna-Maid-7B - flammenai/flammen24-mistral-7B - cgato/TheSpice-7b-v0.1.1 - flammenai/Mahou-1.1-mistral-7B - KatyTheCutie/LemonadeRP-4.5.3 --- # Ransss/flammen24X-mistral-7B-Q8_0-GGUF This model was converted to GGUF format from [`flammenai/flammen24X-mistral-7B`](https://huggingface.co/flammenai/flammen24X-mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/flammenai/flammen24X-mistral-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Ransss/flammen24X-mistral-7B-Q8_0-GGUF --model flammen24x-mistral-7b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Ransss/flammen24X-mistral-7B-Q8_0-GGUF --model flammen24x-mistral-7b.Q8_0.gguf -c 2048 ``` Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m flammen24x-mistral-7b.Q8_0.gguf -n 128 ```
Edgar-00/Models-BERT-1716041270.498132
Edgar-00
2024-05-18T14:59:02Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-18T14:09:47Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Models-BERT-1716041270.498132 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Models-BERT-1716041270.498132 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7359 - Accuracy: 0.784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6267 | 1.0 | 2455 | 0.5639 | 0.766 | | 0.4036 | 2.0 | 4910 | 0.6091 | 0.782 | | 0.259 | 3.0 | 7365 | 0.7359 | 0.784 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
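A hedged inference sketch for this checkpoint; the `pipeline` call and the sample sentence are assumptions, and the label semantics are undocumented in the card (it was trained on an unknown dataset).

```python
# Hedged sketch: pipeline inference with the fine-tuned BERT classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="Edgar-00/Models-BERT-1716041270.498132")
print(clf("This is a sample sentence."))  # label semantics are not documented
```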
Rhma/MistralaDialo5
Rhma
2024-05-18T14:56:05Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:52:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MrezaPRZ/codellama_synthetic_gretel
MrezaPRZ
2024-05-18T14:55:59Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:53:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rafaelsandroni/lora-adapter-for-llama-3-8b-Instruct
rafaelsandroni
2024-05-18T14:55:48Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-18T14:55:45Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** rafaelsandroni - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
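A hedged sketch for attaching this adapter with PEFT; the repo ids come from this card, while loading the 4-bit base through transformers (rather than Unsloth's FastLanguageModel) is an assumption.

```python
# Hedged sketch: PEFT adapter on top of the 4-bit Unsloth base model.
# Assumption: bitsandbytes is installed so the bnb-4bit base loads as-is.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-Instruct-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rafaelsandroni/lora-adapter-for-llama-3-8b-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct-bnb-4bit")
```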
Aurelia25/Smile_Twitter_Sentiment_Analysis
Aurelia25
2024-05-18T14:55:02Z
92
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-18T14:46:10Z
This directory includes a few sample datasets to get you started. * `california_housing_data*.csv` is California housing data from the 1990 US Census; more information is available at: https://developers.google.com/machine-learning/crash-course/california-housing-data-description * `mnist_*.csv` is a small sample of the [MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is described at: http://yann.lecun.com/exdb/mnist/ * `anscombe.json` contains a copy of [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it was originally described in Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American Statistician. 27 (1): 17-21. JSTOR 2682899. and our copy was prepared by the [vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).
Dandan0K/Pilot_vox_Ref_french
Dandan0K
2024-05-18T14:55:00Z
79
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-18T13:52:46Z
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-100k_s973 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
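A hedged transcription sketch; the model id comes from this record, while the pipeline call and the placeholder filename are assumptions. Note the 16 kHz sampling requirement above.

```python
# Hedged sketch: French ASR with the transformers pipeline.
# "sample_16khz.wav" is a placeholder; supply your own 16 kHz audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Dandan0K/Pilot_vox_Ref_french")
print(asr("sample_16khz.wav")["text"])
```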
SamirLahouar/Reinforce-unit4
SamirLahouar
2024-05-18T14:53:09Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T14:52:59Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-unit4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AliSaadatV/virus_pythia_14_1024_2d_representation_MSEPlusCE
AliSaadatV
2024-05-18T14:52:48Z
128
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:52:47Z
--- base_model: EleutherAI/pythia-14m tags: - generated_from_trainer model-index: - name: virus_pythia_14_1024_2d_representation_MSEPlusCE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # virus_pythia_14_1024_2d_representation_MSEPlusCE This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
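A hedged loading sketch; only the repo id comes from the card, and given the custom MSE-plus-CE objective implied by the name, treat any sampled text as exploratory.

```python
# Hedged sketch: plain causal-LM loading of the Pythia fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AliSaadatV/virus_pythia_14_1024_2d_representation_MSEPlusCE"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
```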
Edgar404/a2c-PandaPickAndPlace-v3
Edgar404
2024-05-18T14:51:17Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T14:46:40Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlace-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlace-v3 type: PandaPickAndPlace-v3 metrics: - type: mean_reward value: -50.00 +/- 0.00 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlace-v3** This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal, hedged sketch (the checkpoint filename is an assumption; check the repo's files): ```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumption: the zip shares the repo's name; verify it in the Files tab.
checkpoint = load_from_hub(
    repo_id="Edgar404/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",
)
model = A2C.load(checkpoint)
```
rnribeiro/FT-ProsusAI-finbert
rnribeiro
2024-05-18T14:51:00Z
108
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:ProsusAI/finbert", "base_model:finetune:ProsusAI/finbert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-18T13:16:46Z
--- base_model: ProsusAI/finbert tags: - generated_from_trainer metrics: - accuracy model-index: - name: FT-ProsusAI-finbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FT-ProsusAI-finbert This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3622 - Accuracy: 0.85 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.3829 | 0.85 | | No log | 2.0 | 80 | 0.3999 | 0.825 | | No log | 3.0 | 120 | 0.3622 | 0.85 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
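A hedged inference sketch; the repo id comes from this card, while the pipeline usage and the assumption that labels follow FinBERT's positive/negative/neutral scheme are mine.

```python
# Hedged sketch: financial sentiment with the fine-tuned FinBERT.
from transformers import pipeline

clf = pipeline("text-classification", model="rnribeiro/FT-ProsusAI-finbert")
print(clf("Quarterly revenue beat expectations."))
```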
DownwardSpiral33/gpt2-imdb-pos-v2-003
DownwardSpiral33
2024-05-18T14:50:46Z
130
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:50:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rnribeiro/FT-distilbert-base-uncased
rnribeiro
2024-05-18T14:50:17Z
119
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-18T13:16:45Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: FT-distilbert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FT-distilbert-base-uncased This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6614 - Accuracy: 0.65 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6806 | 0.5 | | No log | 2.0 | 80 | 0.6614 | 0.65 | | No log | 3.0 | 120 | 0.6672 | 0.55 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
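A minimal usage sketch for the checkpoint above, assuming the repo is public and keeps DistilBERT's default sequence-classification head; the card does not document the label names, so `LABEL_0`/`LABEL_1` and the example sentence are assumptions:

```python
from transformers import pipeline

# Hypothetical example: input text and printed output are illustrative only.
classifier = pipeline("text-classification", model="rnribeiro/FT-distilbert-base-uncased")
print(classifier("I really enjoyed this lecture."))
# e.g. [{'label': 'LABEL_1', 'score': 0.71}]
```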
rnribeiro/FT-mrm8488-distilroberta-finetuned-financial-news-sentiment-analysis
rnribeiro
2024-05-18T14:50:14Z
111
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis", "base_model:finetune:mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-18T13:16:44Z
--- license: apache-2.0 base_model: mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis tags: - generated_from_trainer metrics: - accuracy model-index: - name: FT-mrm8488-distilroberta-finetuned-financial-news-sentiment-analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FT-mrm8488-distilroberta-finetuned-financial-news-sentiment-analysis This model is a fine-tuned version of [mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis](https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2034 - Accuracy: 0.95 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.2034 | 0.95 | | No log | 2.0 | 80 | 0.2108 | 0.925 | | No log | 3.0 | 120 | 0.2077 | 0.95 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
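A sketch of scoring a headline with this checkpoint, shown with the raw model API rather than the pipeline; the label names are assumed to be inherited from the base financial-sentiment model (negative/neutral/positive), since the card does not list them:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "rnribeiro/FT-mrm8488-distilroberta-finetuned-financial-news-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score one (illustrative) headline and map logits to per-label probabilities.
inputs = tokenizer("Shares fell 5% after the earnings miss.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```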
Yann2310/Reinforce
Yann2310
2024-05-18T14:49:24Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T14:49:22Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 17.30 +/- 5.37 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
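The `mean_reward` metric above (17.30 +/- 5.37) is the mean and standard deviation of episode returns over evaluation rollouts. A sketch of that evaluation loop, assuming the classic `gym` API (pre-0.26, where `reset()` returns only the state) and a `policy` placeholder with an `act(state)` method returning an action; neither is shipped in this repo:

```python
import gym
import numpy as np

def evaluate(policy, n_episodes=10):
    # Roll out the policy and report mean +/- std of episode returns.
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(n_episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            action = policy.act(state)
            state, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return np.mean(returns), np.std(returns)
```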
svjack/emoji_ORPO_Mistral7B_v2_lora
svjack
2024-05-18T14:47:03Z
3
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "zh", "base_model:mistral-community/Mistral-7B-v0.2", "base_model:adapter:mistral-community/Mistral-7B-v0.2", "license:other", "region:us" ]
null
2024-05-13T10:50:08Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: alpindale/Mistral-7B-v0.2-hf model-index: - name: train_2024-05-13-15-43-20 results: [] language: - zh --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Install ```bash pip install peft transformers bitsandbytes ``` # Run with transformers ```python from transformers import TextStreamer, AutoTokenizer, AutoModelForCausalLM from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained("alpindale/Mistral-7B-v0.2-hf",) mis_model = AutoModelForCausalLM.from_pretrained("alpindale/Mistral-7B-v0.2-hf", load_in_4bit = True) mis_model = PeftModel.from_pretrained(mis_model, "svjack/emoji_ORPO_Mistral7B_v2_lora") mis_model = mis_model.eval() streamer = TextStreamer(tokenizer) def mistral_hf_predict(prompt, mis_model = mis_model, tokenizer = tokenizer, streamer = streamer, do_sample = True, top_p = 0.95, top_k = 40, max_new_tokens = 512, max_input_length = 3500, temperature = 0.9, device = "cuda"): messages = [ {"role": "user", "content": prompt[:max_input_length]} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) generated_ids = mis_model.generate(model_inputs, max_new_tokens=max_new_tokens, do_sample=do_sample, streamer = streamer, top_p = top_p, top_k = top_k, temperature = temperature, ) out = tokenizer.batch_decode(generated_ids)[0].split("[/INST]")[-1].replace("</s>", "").strip() return out out = mistral_hf_predict("你是谁?") print(out) ``` # Output ```txt 嘻嘻!我是中国的朋友 😊,我是一个热情的、有趣的、笑颜的中国人! 我们中国人很热情,喜欢大声地说话和喝杯水 🥛,我们喜欢喝茶 🍵, 啥时候都可以喝茶!我们喜欢吃饭 🍟,喝酒 🥂,和朋友们聊天 💬, 我们真的很开朗和乐观 😊! ``` # train_2024-05-13-15-43-20 This model is a fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on the dpo_zh_emoji_rj_en dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Astowny/Reinforce-cartpool
Astowny
2024-05-18T14:45:23Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T14:45:15Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpool results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 451.10 +/- 146.70 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
RichardErkhov/guardrail_-_llama-2-7b-guanaco-instruct-sharded-4bits
RichardErkhov
2024-05-18T14:43:43Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-18T14:38:13Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-2-7b-guanaco-instruct-sharded - bnb 4bits - Model creator: https://huggingface.co/guardrail/ - Original model: https://huggingface.co/guardrail/llama-2-7b-guanaco-instruct-sharded/ Original model description: --- license: apache-2.0 datasets: - timdettmers/openassistant-guanaco pipeline_tag: text-generation --- Model fine-tuned in 4-bit precision using QLoRA on [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) and sharded so it can be loaded in 4-bit on a free Google Colab instance. It can be imported using the `AutoModelForCausalLM` class from `transformers`: ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "guardrail/llama-2-7b-guanaco-instruct-sharded" model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) ```
Janet123/Breeze-7B-Instruct-v0_1_qlora
Janet123
2024-05-18T14:42:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-18T14:39:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kha37lid/autotrain-e6fy8-wru1q
Kha37lid
2024-05-18T14:42:00Z
11
0
diffusers
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-18T14:41:52Z
--- tags: - autotrain - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <sks man> license: openrail++ --- # AutoTrain SDXL LoRA DreamBooth - Kha37lid/autotrain-e6fy8-wru1q <Gallery /> ## Model description These are Kha37lid/autotrain-e6fy8-wru1q LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use <sks man> to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/Kha37lid/autotrain-e6fy8-wru1q/tree/main) them in the Files & versions tab.
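A hedged loading sketch for these adapter weights with 🧨 diffusers, assuming the LoRA file in the repo is diffusers-compatible and a CUDA device is available; the prompt and output filename are illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model, then attach the DreamBooth LoRA from this repo.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kha37lid/autotrain-e6fy8-wru1q")

# The trigger phrase <sks man> activates the learned subject.
image = pipe("a photo of <sks man>, city street at night").images[0]
image.save("sks_man.png")
```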
ethan-ky/distilbert-base-uncased-finetuned-emotion
ethan-ky
2024-05-18T14:41:29Z
119
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-17T03:36:33Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9213719420412787 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2083 - Accuracy: 0.9215 - F1: 0.9214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8068 | 1.0 | 250 | 0.2897 | 0.9155 | 0.9148 | | 0.2389 | 2.0 | 500 | 0.2083 | 0.9215 | 0.9214 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
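A short sketch of querying the classifier above; it assumes the label set of the `emotion` dataset (sadness, joy, love, anger, fear, surprise), which the card itself does not enumerate, and the input sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ethan-ky/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label
)
print(classifier("I can't believe how lucky I am today!"))
```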
jerryjiao198/Marcoro14-7B-slerp
jerryjiao198
2024-05-18T14:40:28Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "AIDC-ai-business/Marcoroni-7B-v3", "EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-16T02:58:43Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - AIDC-ai-business/Marcoroni-7B-v3 - EmbeddedLLM/Mistral-7B-Merge-14-v0.1 --- # Marcoro14-7B-slerp Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: AIDC-ai-business/Marcoroni-7B-v3 layer_range: [0, 32] - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1 layer_range: [0, 32] merge_method: slerp base_model: AIDC-ai-business/Marcoroni-7B-v3 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
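To reproduce a merge from the configuration above, mergekit's CLI reads the YAML and writes the merged weights to a directory. A sketch assuming the config block is saved locally as `config.yaml` (the filename and output path are arbitrary):

```bash
pip install mergekit
# Write the YAML block above to config.yaml, then run:
mergekit-yaml config.yaml ./Marcoro14-7B-slerp
```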
aldjia/Pixelcopter-PLE-v0
aldjia
2024-05-18T14:39:09Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T14:38:59Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Schadic/Reinforce-1
Schadic
2024-05-18T14:38:28Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-18T14:38:19Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 497.60 +/- 7.20 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Ransss/Fimbulvetr-10.7B-v1-Q8_0-GGUF
Ransss
2024-05-18T14:36:04Z
6
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-18T14:35:35Z
--- language: - en license: cc-by-nc-4.0 tags: - llama-cpp - gguf-my-repo --- # Ransss/Fimbulvetr-10.7B-v1-Q8_0-GGUF This model was converted to GGUF format from [`Sao10K/Fimbulvetr-10.7B-v1`](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Ransss/Fimbulvetr-10.7B-v1-Q8_0-GGUF --model fimbulvetr-10.7b-v1.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Ransss/Fimbulvetr-10.7B-v1-Q8_0-GGUF --model fimbulvetr-10.7b-v1.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m fimbulvetr-10.7b-v1.Q8_0.gguf -n 128 ```
ucla-nb-project/bart-finetuned
ucla-nb-project
2024-05-18T14:29:52Z
16
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:datasets/all_binary_and_xe_ey_fae_counterfactual", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-18T10:12:35Z
--- base_model: facebook/bart-base tags: - generated_from_trainer datasets: - datasets/all_binary_and_xe_ey_fae_counterfactual metrics: - accuracy model-index: - name: bart-base-finetuned-xe_ey_fae results: - task: name: Masked Language Modeling type: fill-mask dataset: name: datasets/all_binary_and_xe_ey_fae_counterfactual type: datasets/all_binary_and_xe_ey_fae_counterfactual metrics: - name: Accuracy type: accuracy value: 0.7180178883360112 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-xe_ey_fae This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the datasets/all_binary_and_xe_ey_fae_counterfactual dataset. It achieves the following results on the evaluation set: - Loss: 1.3945 - Accuracy: 0.7180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 100 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 5.4226 | 0.06 | 500 | 3.8138 | 0.3628 | | 4.0408 | 0.12 | 1000 | 3.0576 | 0.4630 | | 3.4979 | 0.18 | 1500 | 2.7016 | 0.5133 | | 3.1691 | 0.24 | 2000 | 2.4880 | 0.5431 | | 2.9564 | 0.3 | 2500 | 2.3309 | 0.5644 | | 2.8078 | 0.35 | 3000 | 2.2320 | 0.5792 | | 2.6741 | 0.41 | 3500 | 2.1506 | 0.5924 | | 2.5323 | 0.47 | 4000 | 1.9846 | 0.6176 | | 2.3678 | 0.53 | 4500 | 1.8813 | 0.6375 | | 2.25 | 0.59 | 5000 | 1.8100 | 0.6497 | | 2.1795 | 0.65 | 5500 | 1.7632 | 0.6579 | | 2.1203 | 0.71 | 6000 | 1.7238 | 0.6646 | | 2.0764 | 0.77 | 6500 | 1.6856 | 0.6713 | | 2.026 | 0.83 | 7000 | 1.6569 | 0.6760 | | 1.9942 | 0.89 | 7500 | 1.6309 | 0.6803 | | 1.9665 | 0.95 | 8000 | 1.6122 | 0.6836 | | 1.9395 | 1.0 | 8500 | 1.5913 | 0.6866 | | 1.9155 | 1.06 | 9000 | 1.5758 | 0.6895 | | 1.8828 | 1.12 | 9500 | 1.5607 | 0.6918 | | 1.8721 | 1.18 | 10000 | 1.5422 | 0.6948 | | 1.8474 | 1.24 | 10500 | 1.5320 | 0.6964 | | 1.8293 | 1.3 | 11000 | 1.5214 | 0.6978 | | 1.8129 | 1.36 | 11500 | 1.5102 | 0.6998 | | 1.8148 | 1.42 | 12000 | 1.5010 | 0.7013 | | 1.7903 | 1.48 | 12500 | 1.4844 | 0.7038 | | 1.7815 | 1.54 | 13000 | 1.4823 | 0.7039 | | 1.7637 | 1.6 | 13500 | 1.4746 | 0.7052 | | 1.7623 | 1.66 | 14000 | 1.4701 | 0.7061 | | 1.7402 | 1.71 | 14500 | 1.4598 | 0.7076 | | 1.7376 | 1.77 | 15000 | 1.4519 | 0.7090 | | 1.7287 | 1.83 | 15500 | 1.4501 | 0.7101 | | 1.7273 | 1.89 | 16000 | 1.4409 | 0.7107 | | 1.7119 | 1.95 | 16500 | 1.4314 | 0.7125 | | 1.7098 | 2.01 | 17000 | 1.4269 | 0.7129 | | 1.6978 | 2.07 | 17500 | 1.4275 | 0.7132 | | 1.698 | 2.13 | 18000 | 1.4218 | 0.7140 | | 1.6837 | 2.19 | 18500 | 1.4151 | 0.7147 | | 1.6908 | 2.25 | 19000 | 1.4137 | 0.7149 | | 1.6902 | 2.31 | 19500 | 1.4085 | 0.7161 | | 1.6741 | 2.36 | 20000 | 1.4121 | 0.7154 | | 1.6823 | 2.42 | 20500 | 1.4037 | 0.7165 | | 1.6692 | 2.48 | 21000 | 1.4039 | 0.7164 | | 1.6669 | 2.54 | 21500 | 1.4015 | 0.7172 | | 1.6613 | 2.6 | 22000 | 1.3979 | 0.7179 | | 1.664 | 2.66 | 22500 | 1.3960 | 0.7180 | | 1.6615 | 2.72 | 23000 | 1.4012 | 0.7172 | | 1.6627 | 2.78 | 23500 | 1.3974 | 0.7178 | | 1.6489 | 2.84 | 24000 | 1.3948 | 0.7182 | | 1.6429 | 2.9 | 24500 | 1.3921 | 0.7184 | | 1.6477 | 2.96 | 25000 | 1.3910 | 0.7182 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
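Since the card reports masked-language-modeling accuracy, the natural way to probe the checkpoint is a fill-mask query (BART uses the `<mask>` token). A sketch assuming the repo is public; the example sentence uses a neopronoun in keeping with the counterfactual training data:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ucla-nb-project/bart-finetuned")
# Ask the model to restore the masked token.
print(fill("Xe finished <mask> homework before dinner."))
```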
TIGER-Lab/Mantis-llava-7b
TIGER-Lab
2024-05-18T14:29:50Z
9
15
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "Mantis", "VLM", "LMM", "Multimodal LLM", "en", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:finetune:llava-hf/llava-1.5-7b-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-04-13T19:19:14Z
--- tags: - Mantis - VLM - LMM - Multimodal LLM - llava base_model: llava-hf/llava-1.5-7b-hf model-index: - name: Mantis-llava-7b results: [] license: apache-2.0 language: - en --- # Mantis: Interleaved Multi-Image Instruction Tuning (Deprecated) **Mantis** is a multimodal conversational AI model that can chat with users about images and text. It's optimized for multi-image reasoning, where interleaved text and images can be used to generate responses. **Note that this is an older version of Mantis**, please refer to our newest version at [mantis-Siglip-llama3](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3). The newer version improves significantly over both multi-image and single-image tasks. Mantis is trained on the newly curated dataset **Mantis-Instruct**, a large-scale multi-image QA dataset that covers various multi-image reasoning tasks. |[Demo](https://huggingface.co/spaces/TIGER-Lab/Mantis) | [Github](https://github.com/TIGER-AI-Lab/Mantis) | [Models](https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4) | ![Mantis](https://raw.githubusercontent.com/TIGER-AI-Lab/Mantis/main/docs/assets/images/overall_barchart.jpeg) ## Inference You can install Mantis's GitHub codes as a Python package ```bash pip install git+https://github.com/TIGER-AI-Lab/Mantis.git ``` then run inference with codes here: [examples/run_mantis.py](https://github.com/TIGER-AI-Lab/Mantis/blob/main/examples/run_mantis_hf.py) ```python from mantis.models.mllava import chat_mllava from PIL import Image import torch image1 = "image1.jpg" image2 = "image2.jpg" images = [Image.open(image1), Image.open(image2)] # load processor and model from mantis.models.mllava import MLlavaProcessor, LlavaForConditionalGeneration processor = MLlavaProcessor.from_pretrained("TIGER-Lab/Mantis-bakllava-7b") model = LlavaForConditionalGeneration.from_pretrained("TIGER-Lab/Mantis-bakllava-7b", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2") # chat text = "<image> <image> What's the difference between these two images? Please describe as much as you can." response, history = chat_mllava(text, images, model, processor) print("USER: ", text) print("ASSISTANT: ", response) # The image on the right has a larger number of wallets displayed compared to the image on the left. The wallets in the right image are arranged in a grid pattern, while the wallets in the left image are displayed in a more scattered manner. The wallets in the right image have various colors, including red, purple, and brown, while the wallets in the left image are primarily brown. text = "How many items are there in image 1 and image 2 respectively?" response, history = chat_mllava(text, images, model, processor, history=history) print("USER: ", text) print("ASSISTANT: ", response) # There are two items in image 1 and four items in image 2. ``` Or, you can run the model without relying on the mantis codes, using pure hugging face transformers. See [examples/run_mantis_hf.py](https://github.com/TIGER-AI-Lab/Mantis/blob/main/examples/run_mantis_hf.py) for details. ## Training Training codes will be released soon.
aertsimon90/Thuner24
aertsimon90
2024-05-18T14:27:55Z
0
0
transformers
[ "transformers", "text-generation", "tr", "en", "de", "it", "ru", "ar", "dataset:open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:22:24Z
--- license: apache-2.0 pipeline_tag: text-generation datasets: - open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.2 language: - tr - en - de - it - ru - ar metrics: - character library_name: transformers ---
beimu/model
beimu
2024-05-18T14:23:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-18T14:23:17Z
--- license: apache-2.0 ---
AliSaadatV/virus_pythia_14_1024_2d_representation_GaussianPlusCE
AliSaadatV
2024-05-18T14:22:18Z
130
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:22:16Z
--- base_model: EleutherAI/pythia-14m tags: - generated_from_trainer model-index: - name: virus_pythia_14_1024_2d_representation_GaussianPlusCE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # virus_pythia_14_1024_2d_representation_GaussianPlusCE This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Fariha4185/bart-large-mnli-samsum
Fariha4185
2024-05-18T14:20:13Z
113
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-mnli", "base_model:finetune:facebook/bart-large-mnli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-16T12:25:21Z
--- license: mit base_model: facebook/bart-large-mnli tags: - generated_from_trainer model-index: - name: bart-large-mnli-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-mnli-samsum This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5107 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4099 | 0.5431 | 500 | 1.5107 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
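The card does not state the downstream task, but a BART model fine-tuned on SAMSum-style data is typically a dialogue summarizer, so here is a hedged sketch via the generic text2text pipeline; the dialogue and length cap are illustrative:

```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="Fariha4185/bart-large-mnli-samsum")
dialogue = "Anna: Are we still on for lunch? Ben: Yes, 12:30 at the usual place."
print(summarizer(dialogue, max_new_tokens=40)[0]["generated_text"])
```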
chillies/llama-3-8b-student-mental-health-chat-q4
chillies
2024-05-18T14:18:55Z
10
2
transformers
[ "transformers", "gguf", "llama", "psychology", "mental-health", "en", "vi", "dataset:chillies/student-mental-health-chat-data-v2", "endpoints_compatible", "region:us" ]
null
2024-05-05T05:01:30Z
--- datasets: - chillies/student-mental-health-chat-data-v2 language: - en - vi tags: - psychology - mental-health ---
EssalhiSara/gpt2-french-corpus
EssalhiSara
2024-05-18T14:15:16Z
130
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T14:15:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ashishkgpian/1k_800merged_model
ashishkgpian
2024-05-18T14:12:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-18T14:11:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Dandan0K/Pilot_vox_Ref_italian
Dandan0K
2024-05-18T14:07:44Z
78
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "it", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-18T14:00:14Z
--- language: - it license: apache-2.0 tags: - automatic-speech-recognition - it datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_it_vp-100k_s449 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
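A transcription sketch for this checkpoint; the audio path is a placeholder and, per the note above, the file must be sampled at 16 kHz:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Dandan0K/Pilot_vox_Ref_italian")
# "sample_it_16khz.wav" stands in for your own 16 kHz Italian recording.
print(asr("sample_it_16khz.wav")["text"])
```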
alexandro767/stable-diffusion-v1-5-finetuned_5e_r8_v1
alexandro767
2024-05-18T14:03:54Z
29
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-18T14:00:56Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
selmamalak/organcmnist-swin-base-finetuned
selmamalak
2024-05-18T14:00:55Z
8
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:medmnist-v2", "base_model:microsoft/swin-large-patch4-window7-224-in22k", "base_model:adapter:microsoft/swin-large-patch4-window7-224-in22k", "license:apache-2.0", "region:us" ]
null
2024-05-18T13:03:26Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: microsoft/swin-large-patch4-window7-224-in22k datasets: - medmnist-v2 metrics: - accuracy - precision - recall - f1 model-index: - name: organcmnist-swin-base-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # organcmnist-swin-base-finetuned This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-large-patch4-window7-224-in22k) on the medmnist-v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2582 - Accuracy: 0.9317 - Precision: 0.9295 - Recall: 0.9177 - F1: 0.9229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.7563 | 0.9988 | 203 | 0.1859 | 0.9365 | 0.9432 | 0.9127 | 0.9201 | | 0.6145 | 1.9975 | 406 | 0.1260 | 0.9640 | 0.9630 | 0.9608 | 0.9600 | | 0.6476 | 2.9963 | 609 | 0.0926 | 0.9774 | 0.9715 | 0.9754 | 0.9723 | | 0.5719 | 4.0 | 813 | 0.0912 | 0.9770 | 0.9749 | 0.9746 | 0.9740 | | 0.5374 | 4.9988 | 1016 | 0.1281 | 0.9695 | 0.9730 | 0.9690 | 0.9699 | | 0.5615 | 5.9975 | 1219 | 0.1088 | 0.9791 | 0.9839 | 0.9819 | 0.9825 | | 0.4959 | 6.9963 | 1422 | 0.1134 | 0.9741 | 0.9812 | 0.9742 | 0.9768 | | 0.425 | 8.0 | 1626 | 0.1016 | 0.9808 | 0.9816 | 0.9820 | 0.9815 | | 0.3151 | 8.9988 | 1829 | 0.1368 | 0.9804 | 0.9843 | 0.9832 | 0.9834 | | 0.3347 | 9.9877 | 2030 | 0.1156 | 0.9837 | 0.9853 | 0.9864 | 0.9856 | ### Framework versions - PEFT 0.11.1 - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
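Because this repo stores a PEFT adapter rather than full weights, inference means attaching it to the Swin base model. A sketch under stated assumptions: OrganCMNIST has 11 classes, the classifier head was saved alongside the adapter, and `organ_slice.png` is a placeholder image:

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "microsoft/swin-large-patch4-window7-224-in22k"
# Re-create the base with an 11-way head so the adapter's saved classifier fits.
base = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=11, ignore_mismatched_sizes=True)
model = PeftModel.from_pretrained(base, "selmamalak/organcmnist-swin-base-finetuned").eval()

processor = AutoImageProcessor.from_pretrained(base_id)
inputs = processor(Image.open("organ_slice.png").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.argmax(-1).item())  # predicted class index
```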
stablediffusionapi/analog-madness-v70
stablediffusionapi
2024-05-18T13:59:25Z
29
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-18T13:57:23Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# Analog Madness v7.0 API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/629076491716040429.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and set **model_id** to "analog-madness-v70".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try the model for free: [Generate Images](https://modelslab.com/models/analog-madness-v70)

Model link: [View model](https://modelslab.com/models/analog-madness-v70)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "analog-madness-v70",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off: **DMGG0RBN**
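The repository is tagged as a `diffusers` `StableDiffusionPipeline`, so the checkpoint can presumably also be run locally rather than through the hosted API. The sketch below is a generic diffusers invocation under that assumption — not an official recipe from the card — and the prompt and output file name are placeholders. (The same pattern applies to the other stablediffusionapi checkpoints later in this dump.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Pull the checkpoint straight from the Hub (assumes a CUDA device is available).
pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/analog-madness-v70", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "ultra realistic close up portrait of a pale cyberpunk woman, cinematic lighting",
    num_inference_steps=30,  # mirrors the API payload above
    guidance_scale=7.5,
).images[0]
image.save("analog_madness_sample.png")
```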
deepnet/SN9-BestLlama3
deepnet
2024-05-18T13:59:17Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-17T08:59:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
theosun/gemma-2b-it-sharegpt-full
theosun
2024-05-18T13:58:13Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T13:49:16Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OmoDee/Abi
OmoDee
2024-05-18T13:56:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-18T13:56:26Z
---
license: apache-2.0
---
PaulR79/mistral_finetuned_synthetic
PaulR79
2024-05-18T13:54:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-18T13:54:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WbjuSrceu/model8blora
WbjuSrceu
2024-05-18T13:52:54Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-18T13:52:08Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** WbjuSrceu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
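If the repository holds merged full weights (the `transformers`/`llama` tags suggest so, though the repo name hints at a LoRA), a plain `transformers` load would be a minimal starting point; treat this sketch as a guess rather than a documented recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes merged full weights; a bare LoRA adapter would need PEFT instead.
tokenizer = AutoTokenizer.from_pretrained("WbjuSrceu/model8blora")
model = AutoModelForCausalLM.from_pretrained("WbjuSrceu/model8blora", device_map="auto")

inputs = tokenizer("Hello, world.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```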
HariprasathSB/whispeeerrr
HariprasathSB
2024-05-18T13:52:46Z
87
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:HariprasathSB/whispeerr", "base_model:finetune:HariprasathSB/whispeerr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-18T13:26:38Z
---
license: apache-2.0
base_model: HariprasathSB/whispeerr
tags:
- generated_from_trainer
model-index:
- name: whispeeerrr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whispeeerrr

This model is a fine-tuned version of [HariprasathSB/whispeerr](https://huggingface.co/HariprasathSB/whispeerr) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 100
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
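The card gives no usage example; since the record's pipeline tag is automatic-speech-recognition, the standard transformers pipeline call is the canonical way to run a fine-tuned Whisper checkpoint. The audio file name below is a placeholder.

```python
from transformers import pipeline

# chunk_length_s lets the pipeline transcribe audio longer than Whisper's 30 s window.
asr = pipeline(
    "automatic-speech-recognition",
    model="HariprasathSB/whispeeerrr",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical input file
```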
fzzhang/mistralv1_dora_r4_25e5_e05_merged
fzzhang
2024-05-18T13:52:03Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T13:47:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stablediffusionapi/absolutereality-v181
stablediffusionapi
2024-05-18T13:50:35Z
241
2
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-18T13:48:22Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# AbsoluteReality v1.8.1 API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/11161561981716040047.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and set **model_id** to "absolutereality-v181".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try the model for free: [Generate Images](https://modelslab.com/models/absolutereality-v181)

Model link: [View model](https://modelslab.com/models/absolutereality-v181)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "absolutereality-v181",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off: **DMGG0RBN**
stablediffusionapi/cetus-mix-v4
stablediffusionapi
2024-05-18T13:47:33Z
29
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-18T13:45:38Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# Cetus-Mix v4 API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/12849060001716039847.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and set **model_id** to "cetus-mix-v4".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try the model for free: [Generate Images](https://modelslab.com/models/cetus-mix-v4)

Model link: [View model](https://modelslab.com/models/cetus-mix-v4)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "cetus-mix-v4",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off: **DMGG0RBN**
carlesoctav/coba-pth-2
carlesoctav
2024-05-18T13:47:19Z
37
0
transformers
[ "transformers", "safetensors", "bert", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-18T13:38:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fzzhang/mistralv1_dora_r4_25e5_e05
fzzhang
2024-05-18T13:47:16Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-18T13:47:14Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_dora_r4_25e5_e05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistralv1_dora_r4_25e5_e05

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
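Since this repository is a PEFT adapter for Mistral-7B-v0.1 (DoRA, going by the name), a plausible loading sketch is the standard PEFT pattern below; the card itself does not show one, so treat it as an assumption-laden example rather than the author's method.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card's metadata.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the fine-tuned adapter weights.
model = PeftModel.from_pretrained(base, "fzzhang/mistralv1_dora_r4_25e5_e05")
model.eval()
```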
stablediffusionapi/yesmix-v40
stablediffusionapi
2024-05-18T13:46:34Z
29
1
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-18T13:44:18Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# YesMix v4.0 API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/8589197591716038433.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and set **model_id** to "yesmix-v40".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try the model for free: [Generate Images](https://modelslab.com/models/yesmix-v40)

Model link: [View model](https://modelslab.com/models/yesmix-v40)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "yesmix-v40",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off: **DMGG0RBN**