| Column | Type | Min | Max |
|---------------|------------------------|---------------------|---------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 00:44:55 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (519 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 00:44:41 |
| card | string (length) | 11 | 1.01M |
mahdin70/CodeBERT-VulnCWE
mahdin70
2025-04-22T12:01:18Z
0
0
transformers
[ "transformers", "safetensors", "multi_task_codebert", "feature-extraction", "custom_code", "dataset:mahdin70/cwe_enriched_balanced_bigvul_primevul", "base_model:microsoft/codebert-base", "base_model:finetune:microsoft/codebert-base", "license:mit", "region:us" ]
feature-extraction
2025-04-22T11:28:49Z
--- license: mit datasets: - mahdin70/cwe_enriched_balanced_bigvul_primevul metrics: - accuracy - precision - recall - f1 base_model: - microsoft/codebert-base library_name: transformers --- # CodeBERT-VulnCWE - Fine-Tuned CodeBERT for Vulnerability and CWE Classification ## Model Overview This model is a fine-tuned version of **microsoft/codebert-base** on a curated and enriched dataset for vulnerability detection and CWE classification. It predicts whether a given code snippet is vulnerable and, if vulnerable, identifies the specific CWE ID associated with it. ## Dataset The model was fine-tuned using the dataset [mahdin70/cwe_enriched_balanced_bigvul_primevul](https://huggingface.co/datasets/mahdin70/cwe_enriched_balanced_bigvul_primevul). The dataset contains both vulnerable and non-vulnerable code samples and is enriched with CWE metadata. ### CWE IDs Covered: 1. **CWE-119**: Improper Restriction of Operations within the Bounds of a Memory Buffer 2. **CWE-20**: Improper Input Validation 3. **CWE-125**: Out-of-bounds Read 4. **CWE-399**: Resource Management Errors 5. **CWE-200**: Information Exposure 6. **CWE-787**: Out-of-bounds Write 7. **CWE-264**: Permissions, Privileges, and Access Controls 8. **CWE-416**: Use After Free 9. **CWE-476**: NULL Pointer Dereference 10. **CWE-190**: Integer Overflow or Wraparound 11. **CWE-189**: Numeric Errors 12. **CWE-362**: Concurrent Execution using Shared Resource with Improper Synchronization --- ## Model Training The model was trained for **3 epochs** with the following configuration: - **Learning Rate**: 2e-5 - **Weight Decay**: 0.01 - **Batch Size**: 8 - **Optimizer**: AdamW - **Scheduler**: Linear ### Training Loss and Validation Metrics Per Epoch: | Epoch | Training Loss | Validation Loss | Vul Accuracy | Vul Precision | Vul Recall | Vul F1 | CWE Accuracy | |-------|---------------|-----------------|--------------|---------------|------------|--------|--------------| | 1 | 1.4663 | 1.4988 | 0.7887 | 0.8526 | 0.5498 | 0.6685 | 0.2932 | | 2 | 1.2107 | 1.3474 | 0.8038 | 0.8493 | 0.6002 | 0.7034 | 0.3688 | | 3 | 1.1885 | 1.3096 | 0.8034 | 0.8020 | 0.6541 | 0.7205 | 0.3963 | #### Training Summary: - **Total Training Steps**: 2958 - **Training Loss**: 1.3862 - **Training Time**: 3058.7 seconds (~51 minutes) - **Training Speed**: 15.47 samples per second - **Steps Per Second**: 0.967 ## How to Use the Model ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("mahdin70/CodeBERT-VulnCWE", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base") code_snippet = "int main() { int arr[10]; arr[11] = 5; return 0; }" inputs = tokenizer(code_snippet, return_tensors="pt") outputs = model(**inputs) vul_logits = outputs["vul_logits"] cwe_logits = outputs["cwe_logits"] vul_pred = vul_logits.argmax(dim=1).item() cwe_pred = cwe_logits.argmax(dim=1).item() print(f"Vulnerability: {'Vulnerable' if vul_pred == 1 else 'Non-vulnerable'}") print(f"CWE ID: {cwe_pred if vul_pred == 1 else 'N/A'}") ``` ## Limitations and Future Improvements - The model achieves a CWE classification accuracy of 39.63% on the validation set, indicating significant room for improvement. Advanced architectures, better data balancing, or additional pretraining could enhance performance. - The model's vulnerability detection F1-score (72.05% on validation) is moderate but could be improved with further tuning or a larger dataset. 
- The model may struggle with edge cases or CWEs not well-represented in the training data. - Test set evaluation metrics are pending. Running the model on the test set will provide a clearer picture of its generalization. ## Notes - Ensure the `trust_remote_code=True` flag is used when loading the model, as it relies on custom code for the `MultiTaskCodeBERT` architecture. - The model expects input code snippets tokenized using the CodeBERT tokenizer (`microsoft/codebert-base`). - For best results, preprocess code snippets consistently with the training dataset (e.g., max length of 512 tokens).
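As a worked example of the preprocessing note above, here is a minimal sketch; only the 512-token limit and the tokenizer come from the card, while the padding/truncation settings are assumptions:

```python
# Minimal preprocessing sketch: max_length=512 follows the card's note;
# the padding/truncation settings are assumed, not confirmed by the authors.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

def preprocess(code: str):
    # Pad/truncate to the 512-token limit used during training.
    return tokenizer(
        code,
        truncation=True,
        padding="max_length",
        max_length=512,
        return_tensors="pt",
    )

inputs = preprocess("int main() { int arr[10]; arr[11] = 5; return 0; }")
```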
Paul27/CyberXpert-mistral-nemo-1.3
Paul27
2025-04-22T11:53:00Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "base_model:finetune:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:52:45Z
--- base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Paul27 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jimothy43/llama-qlora-task1
jimothy43
2025-04-22T11:51:29Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct", "region:us" ]
null
2025-04-22T11:49:10Z
--- base_model: meta-llama/Llama-3.2-11B-Vision-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
nitishwagmi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-voracious_fierce_hawk
nitishwagmi
2025-04-22T11:50:13Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am voracious fierce hawk", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T16:26:46Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-voracious_fierce_hawk tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am voracious fierce hawk - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-voracious_fierce_hawk This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nitishwagmi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-voracious_fierce_hawk", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
xw17/Llama-3.2-3B-Instruct_finetuned__optimized_lora_globem_origin
xw17
2025-04-22T11:49:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:49:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prepro1/Llama-3.2-3B-Instruct_LORA_3d_r2l
prepro1
2025-04-22T11:49:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-22T11:48:49Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** prepro1 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ChitBrahmbhatt/TTS
ChitBrahmbhatt
2025-04-22T11:47:39Z
0
0
null
[ "en", "hi", "dataset:YeBhoneLin10/openai-whisper-SLR", "base_model:openai/whisper-large-v2", "base_model:finetune:openai/whisper-large-v2", "license:apache-2.0", "region:us" ]
null
2025-04-22T11:45:50Z
--- license: apache-2.0 datasets: - YeBhoneLin10/openai-whisper-SLR language: - en - hi base_model: - openai/whisper-large-v2 ---
LLM-EDA/VeriPrefer-Qwen2.5-Coder-7B
LLM-EDA
2025-04-22T11:47:38Z
2
0
null
[ "safetensors", "qwen2", "en", "dataset:LLM-EDA/pyra_tb", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-04-21T12:26:10Z
--- license: apache-2.0 datasets: - LLM-EDA/pyra_tb language: - en metrics: - code_eval base_model: - Qwen/Qwen2.5-Coder-7B-Instruct --- Check https://github.com/CatIIIIIIII/VeriPrefer for usage.
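The card defers to the linked repository for usage; as a starting point, here is a minimal loading sketch assuming a standard causal-LM checkpoint (the prompt and generation settings are illustrative, not from the card):

```python
# Minimal sketch assuming a standard causal-LM checkpoint; see the
# VeriPrefer repository linked above for the authors' intended usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM-EDA/VeriPrefer-Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Verilog testbench for a 4-bit synchronous counter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```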
LLM-EDA/VeriPrefer-CodeQwen1.5-7B
LLM-EDA
2025-04-22T11:46:54Z
2
0
null
[ "safetensors", "qwen2", "en", "dataset:LLM-EDA/pyra_tb", "base_model:Qwen/CodeQwen1.5-7B-Chat", "base_model:finetune:Qwen/CodeQwen1.5-7B-Chat", "license:apache-2.0", "region:us" ]
null
2025-04-21T12:47:39Z
--- license: apache-2.0 datasets: - LLM-EDA/pyra_tb language: - en metrics: - code_eval base_model: - Qwen/CodeQwen1.5-7B-Chat --- Check https://github.com/CatIIIIIIII/VeriPrefer for usage.
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_d_outcome_only_0_5_MC
gradientrouting-spar
2025-04-22T11:46:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:46:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LLM-EDA/VeriPrefer-deepseek-coder-7b-v1.5
LLM-EDA
2025-04-22T11:46:05Z
0
0
null
[ "safetensors", "llama", "en", "dataset:LLM-EDA/pyra_tb", "base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5", "base_model:finetune:deepseek-ai/deepseek-coder-7b-instruct-v1.5", "license:apache-2.0", "region:us" ]
null
2025-04-21T12:58:44Z
--- license: apache-2.0 datasets: - LLM-EDA/pyra_tb language: - en metrics: - code_eval base_model: - deepseek-ai/deepseek-coder-7b-instruct-v1.5 --- Check https://github.com/CatIIIIIIII/VeriPrefer for usage.
Hartunka/distilbert_km_20_v2_mrpc
Hartunka
2025-04-22T11:45:42Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_km_20_v2", "base_model:finetune:Hartunka/distilbert_km_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T11:44:31Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_km_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert_km_20_v2_mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.6813725490196079 - name: F1 type: f1 value: 0.8 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_km_20_v2_mrpc This model is a fine-tuned version of [Hartunka/distilbert_km_20_v2](https://huggingface.co/Hartunka/distilbert_km_20_v2) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6079 - Accuracy: 0.6814 - F1: 0.8 - Combined Score: 0.7407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6244 | 1.0 | 15 | 0.6130 | 0.7034 | 0.8136 | 0.7585 | | 0.5689 | 2.0 | 30 | 0.6079 | 0.6814 | 0.8 | 0.7407 | | 0.5055 | 3.0 | 45 | 0.6477 | 0.7108 | 0.8168 | 0.7638 | | 0.4324 | 4.0 | 60 | 0.6750 | 0.6691 | 0.7660 | 0.7176 | | 0.3194 | 5.0 | 75 | 0.7787 | 0.6667 | 0.7631 | 0.7149 | | 0.1929 | 6.0 | 90 | 1.0004 | 0.6691 | 0.7576 | 0.7134 | | 0.1087 | 7.0 | 105 | 1.2846 | 0.6275 | 0.7206 | 0.6740 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
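Since MRPC is a sentence-pair task, inference passes both sentences to the tokenizer together — a minimal sketch (the example pair is illustrative, and because the card does not document an id2label mapping, the label order below is an assumption):

```python
# Minimal inference sketch; the label order is an assumption since the
# card does not specify id2label (GLUE MRPC convention: 1 = paraphrase).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Hartunka/distilbert_km_20_v2_mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "The company reported strong quarterly earnings."
s2 = "Quarterly earnings at the company were strong."
inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P(not paraphrase), P(paraphrase)] under the assumed mapping
```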
Hastagaras/run-3-8b-test
Hastagaras
2025-04-22T11:43:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T11:18:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jollyfish/wlgv3t-new-fold4-26-3-11
Jollyfish
2025-04-22T11:43:11Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-22T11:27:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hujianing/marian-finetuned-kde4-en-to-fr
hujianing
2025-04-22T11:39:45Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2025-04-22T09:27:46Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8554 - Model Preparation Time: 0.0145 - Bleu: 32.6656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Tokenizers 0.21.1
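For inference, the checkpoint can be loaded with the standard translation pipeline — a minimal sketch (the input sentence is only an example):

```python
# Minimal usage sketch; the repo ID comes from the card above,
# the input sentence is illustrative.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="hujianing/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads")[0]["translation_text"])
```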
MinHyeong/dolly-v2-7b_lora_focal_8_16_qkv_pyt
MinHyeong
2025-04-22T11:39:33Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T11:35:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
grimjim/MagSoup-v1-12B
grimjim
2025-04-22T11:39:31Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:grimjim/MagnaRei-v2-12B", "base_model:merge:grimjim/MagnaRei-v2-12B", "base_model:grimjim/Magnolia-v3-12B", "base_model:merge:grimjim/Magnolia-v3-12B", "base_model:inflatebot/MN-12B-Mag-Mell-R1", "base_model:merge:inflatebot/MN-12B-Mag-Mell-R1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T11:34:34Z
--- base_model: - grimjim/MagnaRei-v2-12B - grimjim/Magnolia-v3-12B - inflatebot/MN-12B-Mag-Mell-R1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [grimjim/Magnolia-v3-12B](https://huggingface.co/grimjim/Magnolia-v3-12B) as a base. ### Models Merged The following models were included in the merge: * [grimjim/MagnaRei-v2-12B](https://huggingface.co/grimjim/MagnaRei-v2-12B) * [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: inflatebot/MN-12B-Mag-Mell-R1 - model: grimjim/MagnaRei-v2-12B merge_method: model_stock base_model: grimjim/Magnolia-v3-12B normalize: false int8_mask: true dtype: bfloat16 ```
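To reproduce the merge, the YAML above can be passed to mergekit — a minimal sketch assuming mergekit's documented Python entry point (`run_merge`); the config path, output directory, and options are placeholders, not from this card:

```python
# Minimal reproduction sketch; assumes the YAML above is saved locally as
# config.yaml. Entry point per mergekit's documented Python API; output
# path and options are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./MagSoup-v1-12B",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```

The same config file also works with the `mergekit-yaml` command-line tool.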
jaeyoungk/dpo-lora-adapter-epoch4
jaeyoungk
2025-04-22T11:38:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-21T05:58:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HYUKJUNCHOI/0422_llama_2ep_5e-5_aug
HYUKJUNCHOI
2025-04-22T11:34:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mllama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:34:11Z
--- base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mllama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** HYUKJUNCHOI - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TareksTesting/Alkahest-V10.1-LLaMa-70B
TareksTesting
2025-04-22T11:34:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:TareksLab/Alkahest-Sub-LLaMa-70B", "base_model:merge:TareksLab/Alkahest-Sub-LLaMa-70B", "base_model:TareksTesting/Alkahest-V8-LLaMa-70B", "base_model:merge:TareksTesting/Alkahest-V8-LLaMa-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T11:06:29Z
--- base_model: - TareksLab/Alkahest-Sub-LLaMa-70B - TareksTesting/Alkahest-V8-LLaMa-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksTesting/Alkahest-V8-LLaMa-70B](https://huggingface.co/TareksTesting/Alkahest-V8-LLaMa-70B) as a base. ### Models Merged The following models were included in the merge: * [TareksLab/Alkahest-Sub-LLaMa-70B](https://huggingface.co/TareksLab/Alkahest-Sub-LLaMa-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TareksLab/Alkahest-Sub-LLaMa-70B parameters: weight: 1e-4 density: 1 lambda: 1 - model: TareksTesting/Alkahest-V8-LLaMa-70B parameters: weight: 1 density: 1 lambda: 1 base_model: TareksTesting/Alkahest-V8-LLaMa-70B merge_method: dare_ties parameters: normalize: false int8_mask: true tokenizer: source: base chat_template: llama3 dtype: bfloat16 name: alkahest.ex ```
heziyevv/cosyvoice2-0.5b-15epochs-elise-data
heziyevv
2025-04-22T11:34:11Z
0
0
null
[ "onnx", "safetensors", "arxiv:2412.10117", "region:us" ]
null
2025-04-22T11:31:19Z
[![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners) ## 👉🏻 CosyVoice 👈🏻 **CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B) **CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M) ## Highlight🔥 **CosyVoice 2.0** has been released! Compared to version 1.0, the new version offers more accurate, more stable, faster, and better speech generation capabilities. ### Multilingual - **Supported Languages**: Chinese, English, Japanese, Korean, Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.) - **Crosslingual & Mixlingual**: Supports zero-shot voice cloning for cross-lingual and code-switching scenarios. ### Ultra-Low Latency - **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies. - **Rapid First Packet Synthesis**: Achieves latency as low as 150ms while maintaining high-quality audio output. ### High Accuracy - **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0. - **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set. ### Strong Stability - **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis. - **Cross-language Synthesis**: Marked improvements compared to version 1.0. ### Natural Experience - **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53. - **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments. ## Roadmap - [x] 2024/12 - [x] 25hz cosyvoice 2.0 released - [x] 2024/09 - [x] 25hz cosyvoice base model - [x] 25hz cosyvoice voice conversion model - [x] 2024/08 - [x] Repetition Aware Sampling (RAS) inference for llm stability - [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization - [x] 2024/07 - [x] Flow matching training support - [x] WeTextProcessing support when ttsfrd is not available - [x] Fastapi server and client ## Install **Clone and install** - Clone the repo ``` sh git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git # If you failed to clone the submodule due to network failures, please run the following command until it succeeds cd CosyVoice git submodule update --init --recursive ``` - Install Conda: please see https://docs.conda.io/en/latest/miniconda.html - Create Conda env: ``` sh conda create -n cosyvoice python=3.10 conda activate cosyvoice # pynini is required by WeTextProcessing, use conda to install it as it can be executed on all platforms. 
conda install -y -c conda-forge pynini==2.1.5 pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com # If you encounter sox compatibility issues # ubuntu sudo apt-get install sox libsox-dev # centos sudo yum install sox sox-devel ``` **Model download** We strongly recommend that you download our pretrained `CosyVoice2-0.5B` `CosyVoice-300M` `CosyVoice-300M-SFT` `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource. ``` python # Download models via the ModelScope SDK from modelscope import snapshot_download snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B') snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M') snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz') snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT') snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct') snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd') ``` ``` sh # Download models via git; make sure git lfs is installed mkdir -p pretrained_models git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd ``` Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance. Note that this step is not necessary; if you do not install the `ttsfrd` package, WeTextProcessing is used by default. ``` sh cd pretrained_models/CosyVoice-ttsfrd/ unzip resource.zip -d . pip install ttsfrd_dependency-0.1-py3-none-any.whl pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl ``` **Basic Usage** We strongly recommend using `CosyVoice2-0.5B` for better performance. Follow the code below for detailed usage of each model. 
``` python import sys sys.path.append('third_party/Matcha-TTS') from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2 from cosyvoice.utils.file_utils import load_wav import torchaudio ``` **CosyVoice2 Usage** ```python cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False) # NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference # zero_shot usage prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000) for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)): torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) # fine grained control, for supported control, check cosyvoice/tokenizer/tokenizer.py#L248 for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)): torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) # instruct usage for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)): torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) ``` **CosyVoice Usage** ```python cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False) # sft usage print(cosyvoice.list_available_spks()) # change stream=True for chunk stream inference for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)): torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M') # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000) for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)): torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) # cross_lingual usage prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000) for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)): torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) # vc usage prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000) source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000) for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)): torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct') # instruct usage, support <laughter></laughter><strong></strong>[laughter][breath] for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. 
Fights with fervor for justice, but struggles with impulsiveness.', stream=False)): torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate) ``` **Start web demo** You can use our web demo page to get familiar with CosyVoice quickly. Please see the demo website for details. ``` python # change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M ``` **Advanced Usage** For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`. **Build for deployment** Optionally, if you want service deployment, you can run the following steps. ``` sh cd runtime/python docker build -t cosyvoice:v1.0 . # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference # for grpc usage docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity" cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct> # for fastapi usage docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity" cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct> ``` ## Discussion & Communication You can discuss directly on [GitHub Issues](https://github.com/FunAudioLLM/CosyVoice/issues). You can also scan the QR code to join our official Dingding chat group. <img src="./asset/dingding.png" width="250px"> ## Acknowledgements 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR). 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec). 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS). 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec). 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet). ## Disclaimer The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
Draq10/deepseekr1-POMI-16bitGGUF-HF
Draq10
2025-04-22T11:34:04Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B", "base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-22T11:33:47Z
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Draq10 - **License:** apache-2.0 - **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
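Since the repository ships GGUF weights, a minimal local-inference sketch with `llama-cpp-python` may be useful. The `filename` glob is an assumption; check the repo's file list for the actual GGUF name:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="Draq10/deepseekr1-POMI-16bitGGUF-HF",
    filename="*.gguf",  # assumption: replace with the actual file name in the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain chain-of-thought prompting."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```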
BoSchrodt29931/xfhbfghfn
BoSchrodt29931
2025-04-22T11:33:41Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-04-22T11:33:40Z
--- license: bigscience-bloom-rail-1.0 ---
xw17/Qwen2-1.5B-Instruct_finetuned__optimized_lora_globem_origin
xw17
2025-04-22T11:32:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:32:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASethi04/Qwen-Qwen2.5-7B-opc-sft-second-lora
ASethi04
2025-04-22T11:31:20Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:24:28Z
--- base_model: Qwen/Qwen2.5-7B library_name: transformers model_name: Qwen-Qwen2.5-7B-opc-sft-second-lora tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen-Qwen2.5-7B-opc-sft-second-lora This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-opc-sft-second-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/m494kwqk) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ASethi04/google-gemma-2-9b-legalbench-second-lora
ASethi04
2025-04-22T11:30:59Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:25:44Z
--- base_model: google/gemma-2-9b library_name: transformers model_name: google-gemma-2-9b-legalbench-second-lora tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for google-gemma-2-9b-legalbench-second-lora This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-legalbench-second-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/rifvvzv7) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Hartunka/tiny_bert_rand_100_v2_qnli
Hartunka
2025-04-22T11:30:40Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/tiny_bert_rand_100_v2", "base_model:finetune:Hartunka/tiny_bert_rand_100_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T11:24:37Z
--- library_name: transformers language: - en base_model: Hartunka/tiny_bert_rand_100_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny_bert_rand_100_v2_qnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.6168771737140765 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny_bert_rand_100_v2_qnli This model is a fine-tuned version of [Hartunka/tiny_bert_rand_100_v2](https://huggingface.co/Hartunka/tiny_bert_rand_100_v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6495 - Accuracy: 0.6169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.666 | 1.0 | 410 | 0.6495 | 0.6169 | | 0.6355 | 2.0 | 820 | 0.6538 | 0.6247 | | 0.5919 | 3.0 | 1230 | 0.6659 | 0.6160 | | 0.531 | 4.0 | 1640 | 0.7203 | 0.6207 | | 0.4603 | 5.0 | 2050 | 0.7937 | 0.6132 | | 0.3924 | 6.0 | 2460 | 0.9373 | 0.6068 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
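Since QNLI is a question/answer-sentence entailment task, inference takes a sentence pair. A minimal sketch (the example pair is illustrative, and the label names are whatever the checkpoint's config defines):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Hartunka/tiny_bert_rand_100_v2_qnli")
# QNLI pairs a question with a candidate answer sentence
print(clf({
    "text": "What is the Grotto at Notre Dame?",
    "text_pair": "It is a replica of the grotto at Lourdes, France.",
}))
```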
01PrathamS/mistral-v2_finetune_unsloth_train_final_dataset_merged
01PrathamS
2025-04-22T11:30:12Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-22T08:10:07Z
--- base_model: unsloth/mistral-7b-v0.2-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** 01PrathamS - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Eliiff/Fabric_Defects_Qwen_FT
Eliiff
2025-04-22T11:27:00Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-22T11:26:58Z
--- license: other license_name: other license_link: LICENSE ---
ASethi04/google-gemma-2-9b-pubmedqa-lora-first
ASethi04
2025-04-22T11:26:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "endpoints_compatible", "region:us" ]
null
2025-04-22T09:25:32Z
--- base_model: google/gemma-2-9b library_name: transformers model_name: google-gemma-2-9b-pubmedqa-lora-first tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for google-gemma-2-9b-pubmedqa-lora-first This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-pubmedqa-lora-first", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/azqbedde) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MKR-AI/clip-finetuned-face-liveness
MKR-AI
2025-04-22T11:25:15Z
58
0
transformers
[ "transformers", "safetensors", "clip", "image-classification", "generated_from_trainer", "base_model:openai/clip-vit-large-patch14", "base_model:finetune:openai/clip-vit-large-patch14", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-04-21T07:43:44Z
--- library_name: transformers base_model: openai/clip-vit-large-patch14 tags: - generated_from_trainer model-index: - name: clip-finetuned-face-liveness results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clip-finetuned-face-liveness This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on an unspecified dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0157 - eval_accuracy: 0.9968 - eval_precision: 0.9943 - eval_recall: 1.0 - eval_f1: 0.9971 - eval_roc_auc: 1.0 - eval_runtime: 255.4133 - eval_samples_per_second: 8.613 - eval_steps_per_second: 0.54 - epoch: 6.0 - step: 414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.51.1 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
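A minimal usage sketch via the image-classification pipeline; the image path is a placeholder and the label names come from the checkpoint's config:

```python
from transformers import pipeline

clf = pipeline("image-classification", model="MKR-AI/clip-finetuned-face-liveness")
print(clf("face.jpg"))  # placeholder path to a face image
```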
Romain-XV/172b8f01-d443-4562-996c-fad9129e97b3
Romain-XV
2025-04-22T11:24:42Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:finetune:DeepMount00/Llama-3-8b-Ita", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T09:29:39Z
--- base_model: DeepMount00/Llama-3-8b-Ita library_name: transformers model_name: 172b8f01-d443-4562-996c-fad9129e97b3 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 172b8f01-d443-4562-996c-fad9129e97b3 This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Romain-XV/172b8f01-d443-4562-996c-fad9129e97b3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/u3vi0cxu) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BLACK-ZERO/spritual_ai
BLACK-ZERO
2025-04-22T11:21:29Z
0
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-19T09:24:06Z
--- library_name: transformers pipeline_tag: text-generation base_model: - meta-llama/Llama-3.1-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PengZhang424242/whisper-tiny-ONNX
PengZhang424242
2025-04-22T11:21:29Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-tiny", "base_model:quantized:openai/whisper-tiny", "region:us" ]
automatic-speech-recognition
2025-04-22T11:21:05Z
--- library_name: transformers.js base_model: - openai/whisper-tiny --- # whisper-tiny (ONNX) This is an ONNX version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
xw17/Phi-3-mini-4k-instruct_finetuned__optimized_lora_globem_origin
xw17
2025-04-22T11:19:36Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:19:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kathiatownsendalvaradoh28q/dfbfg
kathiatownsendalvaradoh28q
2025-04-22T11:18:14Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-04-22T11:18:06Z
--- license: bsd-3-clause ---
liangjiang003/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fishy_jagged_cassowary
liangjiang003
2025-04-22T11:17:18Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fishy jagged cassowary", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T16:51:56Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fishy_jagged_cassowary tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fishy jagged cassowary - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fishy_jagged_cassowary This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="liangjiang003/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fishy_jagged_cassowary", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
momentum99/sd-control-lora-v3-color_sketch_prompt-half_skip_attn-rank64-conv_in-rank64
momentum99
2025-04-22T11:16:45Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "control-lora-v3", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-04-21T13:50:19Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet - control-lora-v3 - diffusers-training --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # sdxl-control-lora-v3-momentum99/sd-control-lora-v3-color_sketch_prompt-half_skip_attn-rank64-conv_in-rank64 These are control-lora-v3 weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. You can find some example images below. prompt: white background, a detailed high-quality colorful image, solo, 1girl, long hair, looking at viewer, standing, simple background, full body ![images_0](./images_0.png) prompt: white background, a detailed high-quality colorful image, 1girl, solo, full body, shoes, long sleeves, looking at viewer, simple background ![images_1](./images_1.png) prompt: white background, a detailed high-quality colorful image, 1girl, solo, skirt, simple background, shoes, smile ![images_2](./images_2.png) prompt: white background, a detailed high-quality colorful image, 1girl, solo, blue eyes, full body, shirt, standing, looking at viewer ![images_3](./images_3.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
krtk00/custom_id_lora
krtk00
2025-04-22T11:14:08Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-22T11:14:06Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: CUSTID --- # Custom_Id_Lora <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `CUSTID` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "CUSTID", "lora_weights": "https://huggingface.co/krtk00/custom_id_lora/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('krtk00/custom_id_lora', weight_name='lora.safetensors') image = pipeline('CUSTID').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/krtk00/custom_id_lora/discussions) to add images that show off what you’ve made with this LoRA.
francsharma/doordarshan
francsharma
2025-04-22T11:14:04Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-22T11:13:55Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # doordarshan <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/francsharma/doordarshan/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
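No usage snippet is given above, so here is a minimal diffusers sketch following the usual FLUX LoRA workflow. The `weight_name` is an assumption (check the Files & versions tab for the actual filename), the trigger word is not specified in the card, and FLUX.1-dev requires accepting its license:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("francsharma/doordarshan", weight_name="lora.safetensors")  # filename is an assumption
image = pipe("a portrait photo").images[0]
image.save("output.png")
```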
chihhsichen/Llama-3.1-8B-MATH
chihhsichen
2025-04-22T11:12:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:11:45Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** chihhsichen - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
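A minimal inference sketch using Unsloth's own loader; the 4-bit loading flag and the sample prompt are illustrative choices:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chihhsichen/Llama-3.1-8B-MATH",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the 4-bit base checkpoint
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
inputs = tokenizer("Solve step by step: 12 * 17 = ?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```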
mlfoundations-dev/meta_chat_reasoning_0_100
mlfoundations-dev
2025-04-22T11:11:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T03:56:28Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: meta_chat_reasoning_0_100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meta_chat_reasoning_0_100 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/meta_chat_reasoning_0_100 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.0.2 - Tokenizers 0.20.3
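The card lists hyperparameters but no quick-start snippet; a minimal chat-style sketch using the standard Qwen2.5 chat template (the question is illustrative):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mlfoundations-dev/meta_chat_reasoning_0_100",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "Why does a cosine learning-rate schedule end near zero?"}]
print(pipe(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```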
TrumpMcDonaldz/bert-43-multilabel-emotion-detection-ONNX
TrumpMcDonaldz
2025-04-22T11:11:14Z
0
0
transformers.js
[ "transformers.js", "onnx", "bert", "text-classification", "base_model:borisn70/bert-43-multilabel-emotion-detection", "base_model:quantized:borisn70/bert-43-multilabel-emotion-detection", "region:us" ]
text-classification
2025-04-22T11:10:54Z
--- library_name: transformers.js base_model: - borisn70/bert-43-multilabel-emotion-detection --- # bert-43-multilabel-emotion-detection (ONNX) This is an ONNX version of [borisn70/bert-43-multilabel-emotion-detection](https://huggingface.co/borisn70/bert-43-multilabel-emotion-detection). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
Jeevan0405/bert-website-intrusion-detection
Jeevan0405
2025-04-22T11:04:51Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T11:04:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASethi04/Qwen-Qwen2.5-7B-pubmedqa-lora-first
ASethi04
2025-04-22T11:04:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "endpoints_compatible", "region:us" ]
null
2025-04-22T09:25:03Z
--- base_model: Qwen/Qwen2.5-7B library_name: transformers model_name: Qwen-Qwen2.5-7B-pubmedqa-lora-first tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen-Qwen2.5-7B-pubmedqa-lora-first This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-pubmedqa-lora-first", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/n0y2se80) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hubii-world/ECG2HRV
hubii-world
2025-04-22T11:02:34Z
0
0
null
[ "joblib", "biology", "electrocardiogram", "feature-extraction", "en", "region:us" ]
feature-extraction
2024-01-24T05:56:28Z
--- language: - en pipeline_tag: feature-extraction tags: - biology - electrocardiogram --- # ECG2HRV Pipeline for processing raw ECG signals into heart rate variability (HRV) features. For more details, see [HUBII](https://hubii.world/). ## How to use To import the model into your project, use the following code: ```python # Imports from huggingface_hub import hf_hub_download import joblib # Define parameters REPO_ID = "hubii-world/ECG2HRV" FILENAME = "ECG2HRV.joblib" # Load the model model = joblib.load( hf_hub_download(repo_id=REPO_ID, filename=FILENAME) ) ``` Example usage of the model: ```python # ecg should be a 1D numpy array with the ECG signal hrv_features = model(input_data=ecg, frequency=100.0) # returns hrv_features in a dictionary with the feature names as keys ```
CSLin3303/qwen2.5-data01
CSLin3303
2025-04-22T11:02:15Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:01:04Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** CSLin3303 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kondababu18/kondababu18
kondababu18
2025-04-22T11:01:30Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:01:19Z
--- base_model: unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kondababu18 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
borisPMC/MedicGrabber_WhisperLargeTurbo
borisPMC
2025-04-22T11:01:14Z
7
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-21T08:25:26Z
--- library_name: transformers license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer model-index: - name: MedicGrabber_WhisperLargeTurbo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedicGrabber_WhisperLargeTurbo This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4672 - Wer Ortho: 14.0594 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | |:-------------:|:-----:|:----:|:---------------:|:---------:| | No log | 0 | 0 | 2.7709 | 32.4752 | | 2.0035 | 1.0 | 29 | 0.5901 | 20.5941 | | 0.2958 | 2.0 | 58 | 0.5169 | 18.0198 | | 0.1159 | 3.0 | 87 | 0.4895 | 17.6238 | | 0.0708 | 4.0 | 116 | 0.4868 | 15.2475 | | 0.0234 | 5.0 | 145 | 0.4672 | 14.0594 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.5.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
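A minimal transcription sketch, assuming the standard `transformers` ASR pipeline; the audio path `sample.wav` is a placeholder, not something the card specifies:

```python
import torch
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an automatic-speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="borisPMC/MedicGrabber_WhisperLargeTurbo",
    torch_dtype=torch.float16,
    device=0,  # set to -1 to run on CPU
)

# chunk_length_s windows long recordings into 30 s segments for long-form transcription
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```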
loraug/taxollama
loraug
2025-04-22T11:00:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-7b-bnb-4bit", "base_model:finetune:unsloth/llama-2-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T11:00:30Z
--- base_model: unsloth/llama-2-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** loraug - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
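A minimal inference sketch with Unsloth's `FastLanguageModel`, assuming the uploaded weights (or adapter) load against the `unsloth/llama-2-7b-bnb-4bit` base; the prompt is illustrative only:

```python
from unsloth import FastLanguageModel

# Load the uploaded checkpoint in 4-bit; Unsloth resolves the base model automatically
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="loraug/taxollama",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path

inputs = tokenizer(
    "Explain the difference between a tax credit and a tax deduction.",
    return_tensors="pt",
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```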
deswaq/juh64
deswaq
2025-04-22T10:59:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:56:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
genki10/BERT_V8_sp20_lw10_ex50_lo00_k3_k3_fold0
genki10
2025-04-22T10:58:06Z
0
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T05:38:46Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_V8_sp20_lw10_ex50_lo00_k3_k3_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_V8_sp20_lw10_ex50_lo00_k3_k3_fold0 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1040 - Qwk: 0.2457 - Mse: 1.1040 - Rmse: 1.0507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.0893 | 0.0 | 8.0893 | 2.8442 | | No log | 2.0 | 6 | 6.7889 | 0.0 | 6.7889 | 2.6056 | | No log | 3.0 | 9 | 5.5036 | 0.0112 | 5.5036 | 2.3460 | | No log | 4.0 | 12 | 4.2704 | 0.0039 | 4.2704 | 2.0665 | | No log | 5.0 | 15 | 3.1360 | 0.0 | 3.1360 | 1.7709 | | No log | 6.0 | 18 | 2.1775 | 0.0921 | 2.1775 | 1.4756 | | No log | 7.0 | 21 | 1.7157 | 0.0409 | 1.7157 | 1.3099 | | No log | 8.0 | 24 | 1.1835 | 0.0316 | 1.1835 | 1.0879 | | No log | 9.0 | 27 | 0.9951 | 0.0316 | 0.9951 | 0.9975 | | No log | 10.0 | 30 | 0.8263 | 0.2628 | 0.8263 | 0.9090 | | No log | 11.0 | 33 | 0.8878 | 0.2139 | 0.8878 | 0.9422 | | No log | 12.0 | 36 | 0.9394 | 0.2465 | 0.9394 | 0.9692 | | No log | 13.0 | 39 | 0.8875 | 0.3263 | 0.8875 | 0.9421 | | No log | 14.0 | 42 | 0.9337 | 0.3266 | 0.9337 | 0.9663 | | No log | 15.0 | 45 | 0.6167 | 0.4566 | 0.6167 | 0.7853 | | No log | 16.0 | 48 | 1.1628 | 0.3048 | 1.1628 | 1.0783 | | No log | 17.0 | 51 | 0.6795 | 0.3624 | 0.6795 | 0.8243 | | No log | 18.0 | 54 | 1.1172 | 0.2676 | 1.1172 | 1.0570 | | No log | 19.0 | 57 | 0.7569 | 0.3502 | 0.7569 | 0.8700 | | No log | 20.0 | 60 | 0.6394 | 0.4428 | 0.6394 | 0.7996 | | No log | 21.0 | 63 | 1.8075 | 0.1829 | 1.8075 | 1.3444 | | No log | 22.0 | 66 | 0.9416 | 0.3199 | 0.9416 | 0.9703 | | No log | 23.0 | 69 | 0.7331 | 0.3876 | 0.7331 | 0.8562 | | No log | 24.0 | 72 | 1.3528 | 0.1939 | 1.3528 | 1.1631 | | No log | 25.0 | 75 | 0.8416 | 0.3332 | 0.8416 | 0.9174 | | No log | 26.0 | 78 | 0.8419 | 0.3640 | 0.8419 | 0.9175 | | No log | 27.0 | 81 | 1.2499 | 0.2435 | 1.2499 | 1.1180 | | No log | 28.0 | 84 | 0.8743 | 0.3420 | 0.8743 | 0.9351 | | No log | 29.0 | 87 | 1.3784 | 0.2244 | 1.3784 | 1.1741 | | No log | 30.0 | 90 | 0.8791 | 0.3507 | 0.8791 | 0.9376 | | No log | 31.0 | 93 | 1.1052 | 0.2770 | 1.1052 | 1.0513 | | No log | 32.0 | 96 | 1.2395 | 0.2417 | 1.2395 | 1.1133 | | No log | 33.0 | 99 | 0.6989 | 0.4356 | 0.6989 | 0.8360 | | No log | 34.0 | 102 | 0.7947 | 0.3619 | 0.7947 | 0.8914 | | No log | 35.0 | 105 | 1.1624 | 0.2600 | 1.1624 | 1.0782 | | No log | 36.0 | 108 | 1.1862 | 0.2501 | 1.1862 | 1.0891 | | No log | 37.0 | 111 | 0.8125 | 0.3578 | 0.8125 | 0.9014 | | No log | 38.0 | 114 | 0.9884 | 0.3095 | 0.9884 | 
0.9942 | | No log | 39.0 | 117 | 0.9917 | 0.2988 | 0.9917 | 0.9958 | | No log | 40.0 | 120 | 0.8693 | 0.3337 | 0.8693 | 0.9323 | | No log | 41.0 | 123 | 1.0468 | 0.2826 | 1.0468 | 1.0231 | | No log | 42.0 | 126 | 1.1354 | 0.2570 | 1.1354 | 1.0656 | | No log | 43.0 | 129 | 1.0344 | 0.2852 | 1.0344 | 1.0170 | | No log | 44.0 | 132 | 0.8822 | 0.3322 | 0.8822 | 0.9392 | | No log | 45.0 | 135 | 1.1040 | 0.2457 | 1.1040 | 1.0507 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
Mekuu/LLAMA3.1-8b-Counsel-v1.2
Mekuu
2025-04-22T10:55:39Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:50:56Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Mekuu - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
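A minimal chat-generation sketch, assuming standard `transformers` pipeline usage with the model's chat template; the user message is illustrative only:

```python
from transformers import pipeline

# Chat-style generation with the fine-tuned Llama 3.1 checkpoint
generator = pipeline(
    "text-generation",
    model="Mekuu/LLAMA3.1-8b-Counsel-v1.2",
    device_map="auto",
)
messages = [{"role": "user", "content": "I feel overwhelmed before exams. What can I do?"}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```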
akseljoonas/oR1-Qwen-Coder-7B-Agentic-e2-lr5-b8
akseljoonas
2025-04-22T10:53:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:smolagents/training-traces", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:47:55Z
--- base_model: Qwen/Qwen2.5-7B-Instruct datasets: smolagents/training-traces library_name: transformers model_name: oR1-Qwen-Coder-7B-Agentic-e2-lr5-b8 tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for oR1-Qwen-Coder-7B-Agentic-e2-lr5-b8 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [smolagents/training-traces](https://huggingface.co/datasets/smolagents/training-traces) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="akseljoonas/oR1-Qwen-Coder-7B-Agentic-e2-lr5-b8", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/akseljoonas-university-of-groningen/huggingface/runs/bxpn0y94) This model was trained with SFT. ### Framework versions - TRL: 0.16.0 - Transformers: 4.50.0 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Yujivus/gemma-3-finetune-CoT-tr-Q8_0-GGUF
Yujivus
2025-04-22T10:52:45Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3", "llama-cpp", "gguf-my-repo", "en", "base_model:Yujivus/gemma-3-finetune-CoT-tr", "base_model:quantized:Yujivus/gemma-3-finetune-CoT-tr", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-22T10:51:54Z
--- base_model: Yujivus/gemma-3-finetune-CoT-tr language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma3 - llama-cpp - gguf-my-repo --- # Yujivus/gemma-3-finetune-CoT-tr-Q8_0-GGUF This model was converted to GGUF format from [`Yujivus/gemma-3-finetune-CoT-tr`](https://huggingface.co/Yujivus/gemma-3-finetune-CoT-tr) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Yujivus/gemma-3-finetune-CoT-tr) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Yujivus/gemma-3-finetune-CoT-tr-Q8_0-GGUF --hf-file gemma-3-finetune-cot-tr-q8_0.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Yujivus/gemma-3-finetune-CoT-tr-Q8_0-GGUF --hf-file gemma-3-finetune-cot-tr-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Yujivus/gemma-3-finetune-CoT-tr-Q8_0-GGUF --hf-file gemma-3-finetune-cot-tr-q8_0.gguf -p "The meaning of life and the universe is" ``` or ``` ./llama-server --hf-repo Yujivus/gemma-3-finetune-CoT-tr-Q8_0-GGUF --hf-file gemma-3-finetune-cot-tr-q8_0.gguf -c 2048 ```
kawausorin/kw_extract_gemma-3-1b-it-unsloth-ft-merged
kawausorin
2025-04-22T10:51:44Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:13:28Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** kawausorin - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kawausorin/kw_extract_gemma-3-1b-it-unsloth-ft
kawausorin
2025-04-22T10:50:37Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3_text", "trl", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:12:46Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kawausorin - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RosFiliber740/ghfghfgh
RosFiliber740
2025-04-22T10:50:20Z
0
0
null
[ "license:cc-by-nc-2.0", "region:us" ]
null
2025-04-22T10:50:20Z
--- license: cc-by-nc-2.0 ---
deswaq/juh63
deswaq
2025-04-22T10:50:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:46:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hartunka/tiny_bert_rand_50_v2_stsb
Hartunka
2025-04-22T10:45:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/tiny_bert_rand_50_v2", "base_model:finetune:Hartunka/tiny_bert_rand_50_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T10:44:24Z
--- library_name: transformers language: - en base_model: Hartunka/tiny_bert_rand_50_v2 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: tiny_bert_rand_50_v2_stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.25802571079319986 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny_bert_rand_50_v2_stsb This model is a fine-tuned version of [Hartunka/tiny_bert_rand_50_v2](https://huggingface.co/Hartunka/tiny_bert_rand_50_v2) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.2828 - Pearson: 0.2634 - Spearmanr: 0.2580 - Combined Score: 0.2607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 3.577 | 1.0 | 23 | 2.3197 | 0.1206 | 0.1077 | 0.1142 | | 2.0557 | 2.0 | 46 | 2.4031 | 0.1291 | 0.1249 | 0.1270 | | 1.8854 | 3.0 | 69 | 2.3713 | 0.2039 | 0.1988 | 0.2013 | | 1.7118 | 4.0 | 92 | 2.3258 | 0.2474 | 0.2463 | 0.2469 | | 1.4486 | 5.0 | 115 | 2.2828 | 0.2634 | 0.2580 | 0.2607 | | 1.2898 | 6.0 | 138 | 2.7080 | 0.2622 | 0.2744 | 0.2683 | | 1.0578 | 7.0 | 161 | 2.6507 | 0.2815 | 0.2900 | 0.2857 | | 0.8953 | 8.0 | 184 | 2.8633 | 0.2585 | 0.2633 | 0.2609 | | 0.7584 | 9.0 | 207 | 3.1760 | 0.2421 | 0.2473 | 0.2447 | | 0.6589 | 10.0 | 230 | 3.0019 | 0.2613 | 0.2697 | 0.2655 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
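A minimal scoring sketch, assuming standard `AutoModelForSequenceClassification` usage; the sentence pair is illustrative. STS-B is a regression task, so the model emits a single similarity logit, roughly on the 0-5 GLUE scale:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Hartunka/tiny_bert_rand_50_v2_stsb"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Encode the two sentences as a single pair input
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity: {score:.3f}")
```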
Hartunka/tiny_bert_rand_50_v2_sst2
Hartunka
2025-04-22T10:44:13Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/tiny_bert_rand_50_v2", "base_model:finetune:Hartunka/tiny_bert_rand_50_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T10:40:13Z
--- library_name: transformers language: - en base_model: Hartunka/tiny_bert_rand_50_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny_bert_rand_50_v2_sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8096330275229358 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny_bert_rand_50_v2_sst2 This model is a fine-tuned version of [Hartunka/tiny_bert_rand_50_v2](https://huggingface.co/Hartunka/tiny_bert_rand_50_v2) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4656 - Accuracy: 0.8096 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4222 | 1.0 | 264 | 0.4656 | 0.8096 | | 0.2394 | 2.0 | 528 | 0.5868 | 0.7913 | | 0.1886 | 3.0 | 792 | 0.5450 | 0.7913 | | 0.1585 | 4.0 | 1056 | 0.6403 | 0.7947 | | 0.1306 | 5.0 | 1320 | 0.6642 | 0.7856 | | 0.1116 | 6.0 | 1584 | 0.7983 | 0.7810 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
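A minimal classification sketch, assuming standard pipeline usage; under the GLUE SST-2 convention the default ids map 0 to negative and 1 to positive, unless the config overrides them:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hartunka/tiny_bert_rand_50_v2_sst2")
# Returns a list with the predicted label (LABEL_0 / LABEL_1) and its score
print(classifier("A charming and often affecting journey."))
```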
ASethi04/Qwen-Qwen2.5-7B-legalbench-second-lora
ASethi04
2025-04-22T10:43:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:01:28Z
--- base_model: Qwen/Qwen2.5-7B library_name: transformers model_name: Qwen-Qwen2.5-7B-legalbench-second-lora tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen-Qwen2.5-7B-legalbench-second-lora This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-legalbench-second-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/crm0b3ex) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
genki10/BERT_V8_sp20_lw10_ex40_lo00_k3_k3_fold4
genki10
2025-04-22T10:42:33Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T05:20:49Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_V8_sp20_lw10_ex40_lo00_k3_k3_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_V8_sp20_lw10_ex40_lo00_k3_k3_fold4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7159 - Qwk: 0.4764 - Mse: 0.7159 - Rmse: 0.8461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 9.9626 | 0.0058 | 9.9626 | 3.1564 | | No log | 2.0 | 6 | 6.7478 | 0.0 | 6.7478 | 2.5976 | | No log | 3.0 | 9 | 4.4476 | 0.0171 | 4.4476 | 2.1089 | | No log | 4.0 | 12 | 2.7613 | 0.0017 | 2.7613 | 1.6617 | | No log | 5.0 | 15 | 1.7925 | 0.1058 | 1.7925 | 1.3388 | | No log | 6.0 | 18 | 1.3271 | 0.0445 | 1.3271 | 1.1520 | | No log | 7.0 | 21 | 0.8937 | 0.1101 | 0.8937 | 0.9453 | | No log | 8.0 | 24 | 1.1438 | 0.0420 | 1.1438 | 1.0695 | | No log | 9.0 | 27 | 0.8025 | 0.3116 | 0.8025 | 0.8958 | | No log | 10.0 | 30 | 0.6916 | 0.3247 | 0.6916 | 0.8316 | | No log | 11.0 | 33 | 0.7668 | 0.3417 | 0.7668 | 0.8757 | | No log | 12.0 | 36 | 0.6498 | 0.4704 | 0.6498 | 0.8061 | | No log | 13.0 | 39 | 0.6564 | 0.4719 | 0.6564 | 0.8102 | | No log | 14.0 | 42 | 0.7259 | 0.4269 | 0.7259 | 0.8520 | | No log | 15.0 | 45 | 0.7218 | 0.4043 | 0.7218 | 0.8496 | | No log | 16.0 | 48 | 0.8552 | 0.3535 | 0.8552 | 0.9248 | | No log | 17.0 | 51 | 0.7304 | 0.4696 | 0.7304 | 0.8546 | | No log | 18.0 | 54 | 0.6908 | 0.5048 | 0.6908 | 0.8311 | | No log | 19.0 | 57 | 0.7224 | 0.4435 | 0.7224 | 0.8499 | | No log | 20.0 | 60 | 0.9652 | 0.4156 | 0.9652 | 0.9824 | | No log | 21.0 | 63 | 0.6498 | 0.5150 | 0.6498 | 0.8061 | | No log | 22.0 | 66 | 0.7324 | 0.5090 | 0.7324 | 0.8558 | | No log | 23.0 | 69 | 0.6695 | 0.5697 | 0.6695 | 0.8182 | | No log | 24.0 | 72 | 0.7989 | 0.4764 | 0.7989 | 0.8938 | | No log | 25.0 | 75 | 0.6845 | 0.5383 | 0.6845 | 0.8273 | | No log | 26.0 | 78 | 0.7717 | 0.4960 | 0.7717 | 0.8784 | | No log | 27.0 | 81 | 0.7915 | 0.4179 | 0.7915 | 0.8896 | | No log | 28.0 | 84 | 0.9335 | 0.4819 | 0.9335 | 0.9662 | | No log | 29.0 | 87 | 0.8089 | 0.5257 | 0.8089 | 0.8994 | | No log | 30.0 | 90 | 1.3363 | 0.3700 | 1.3363 | 1.1560 | | No log | 31.0 | 93 | 0.9225 | 0.4777 | 0.9225 | 0.9605 | | No log | 32.0 | 96 | 0.7356 | 0.5675 | 0.7356 | 0.8577 | | No log | 33.0 | 99 | 1.0873 | 0.3953 | 1.0873 | 1.0427 | | No log | 34.0 | 102 | 1.1558 | 0.3617 | 1.1558 | 1.0751 | | No log | 35.0 | 105 | 0.7202 | 0.5536 | 0.7202 | 0.8486 | | No log | 36.0 | 108 | 0.6805 | 0.5531 | 0.6805 | 0.8249 | | No log | 37.0 | 111 | 0.8864 | 0.4358 | 0.8864 | 0.9415 | | No log | 38.0 | 114 | 0.7002 | 0.5617 | 
0.7002 | 0.8368 | | No log | 39.0 | 117 | 0.6441 | 0.5606 | 0.6441 | 0.8026 | | No log | 40.0 | 120 | 0.9972 | 0.4011 | 0.9972 | 0.9986 | | No log | 41.0 | 123 | 0.7535 | 0.4166 | 0.7535 | 0.8681 | | No log | 42.0 | 126 | 0.6324 | 0.5513 | 0.6324 | 0.7952 | | No log | 43.0 | 129 | 0.8189 | 0.3924 | 0.8189 | 0.9049 | | No log | 44.0 | 132 | 0.8369 | 0.3995 | 0.8369 | 0.9148 | | No log | 45.0 | 135 | 0.7035 | 0.5719 | 0.7035 | 0.8388 | | No log | 46.0 | 138 | 0.6622 | 0.5446 | 0.6622 | 0.8138 | | No log | 47.0 | 141 | 0.9300 | 0.4000 | 0.9300 | 0.9644 | | No log | 48.0 | 144 | 0.7179 | 0.5148 | 0.7179 | 0.8473 | | No log | 49.0 | 147 | 0.6759 | 0.5571 | 0.6759 | 0.8222 | | No log | 50.0 | 150 | 0.6832 | 0.5300 | 0.6832 | 0.8266 | | No log | 51.0 | 153 | 0.8732 | 0.4095 | 0.8732 | 0.9344 | | No log | 52.0 | 156 | 0.6637 | 0.5313 | 0.6637 | 0.8147 | | No log | 53.0 | 159 | 0.7607 | 0.4691 | 0.7607 | 0.8722 | | No log | 54.0 | 162 | 0.9066 | 0.4084 | 0.9066 | 0.9522 | | No log | 55.0 | 165 | 0.6816 | 0.5154 | 0.6816 | 0.8256 | | No log | 56.0 | 168 | 0.6782 | 0.5341 | 0.6782 | 0.8235 | | No log | 57.0 | 171 | 0.8924 | 0.4030 | 0.8924 | 0.9447 | | No log | 58.0 | 174 | 0.7817 | 0.4561 | 0.7817 | 0.8842 | | No log | 59.0 | 177 | 0.6460 | 0.5239 | 0.6460 | 0.8037 | | No log | 60.0 | 180 | 0.6717 | 0.5171 | 0.6717 | 0.8196 | | No log | 61.0 | 183 | 0.8408 | 0.4334 | 0.8408 | 0.9169 | | No log | 62.0 | 186 | 0.6663 | 0.5034 | 0.6663 | 0.8163 | | No log | 63.0 | 189 | 0.6753 | 0.5094 | 0.6753 | 0.8217 | | No log | 64.0 | 192 | 0.7441 | 0.4876 | 0.7441 | 0.8626 | | No log | 65.0 | 195 | 0.6463 | 0.5216 | 0.6463 | 0.8040 | | No log | 66.0 | 198 | 0.6749 | 0.5033 | 0.6749 | 0.8215 | | No log | 67.0 | 201 | 0.7972 | 0.4471 | 0.7972 | 0.8929 | | No log | 68.0 | 204 | 0.7144 | 0.4939 | 0.7144 | 0.8452 | | No log | 69.0 | 207 | 0.6627 | 0.5316 | 0.6627 | 0.8141 | | No log | 70.0 | 210 | 0.7900 | 0.4594 | 0.7900 | 0.8888 | | No log | 71.0 | 213 | 0.7505 | 0.4876 | 0.7505 | 0.8663 | | No log | 72.0 | 216 | 0.6681 | 0.5140 | 0.6681 | 0.8174 | | No log | 73.0 | 219 | 0.6665 | 0.5024 | 0.6665 | 0.8164 | | No log | 74.0 | 222 | 0.7426 | 0.4558 | 0.7426 | 0.8617 | | No log | 75.0 | 225 | 0.7159 | 0.4764 | 0.7159 | 0.8461 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
kk-aivio/259198bc-0d16-4bc9-b459-d567fda6d0c2
kk-aivio
2025-04-22T10:41:06Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/gemma-1.1-2b-it", "base_model:adapter:unsloth/gemma-1.1-2b-it", "region:us" ]
null
2025-04-22T10:40:34Z
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/gemma-1.1-2b-it model-index: - name: kk-aivio/259198bc-0d16-4bc9-b459-d567fda6d0c2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kk-aivio/259198bc-0d16-4bc9-b459-d567fda6d0c2 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
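A minimal loading sketch, assuming standard PEFT usage: the adapter attaches to its `unsloth/gemma-1.1-2b-it` base model rather than loading standalone; the prompt is illustrative only:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-1.1-2b-it"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the trained LoRA adapter to the base model
model = PeftModel.from_pretrained(base, "kk-aivio/259198bc-0d16-4bc9-b459-d567fda6d0c2")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello! How are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```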
rajthakkar123/UI-TARS-1.5-7B-4bit-bnb
rajthakkar123
2025-04-22T10:41:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2025-04-22T10:27:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
linoyts/HiDream-yarn-art-LoRA
linoyts
2025-04-22T10:40:31Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "hidream", "hidream-diffusers", "template:sd-lora", "base_model:HiDream-ai/HiDream-I1-Full", "base_model:adapter:HiDream-ai/HiDream-I1-Full", "license:mit", "region:us" ]
text-to-image
2025-04-22T09:09:56Z
--- base_model: HiDream-ai/HiDream-I1-Full library_name: diffusers license: mit instance_prompt: a dog, yarn art style widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - hidream - hidream-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # HiDream Image DreamBooth LoRA - linoyts/HiDream-yarn-art-LoRA <Gallery /> ## Model description These are linoyts/HiDream-yarn-art-LoRA DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Full. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md). ## Trigger words You should use `yarn art style` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](https://huggingface.co/linoyts/HiDream-yarn-art-LoRA/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py >>> import torch >>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM >>> from diffusers import HiDreamImagePipeline >>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct") >>> text_encoder_4 = LlamaForCausalLM.from_pretrained( ... "meta-llama/Meta-Llama-3.1-8B-Instruct", ... output_hidden_states=True, ... output_attentions=True, ... torch_dtype=torch.bfloat16, ... ) >>> pipe = HiDreamImagePipeline.from_pretrained( ... "HiDream-ai/HiDream-I1-Full", ... tokenizer_4=tokenizer_4, ... text_encoder_4=text_encoder_4, ... torch_dtype=torch.bfloat16, ... ) >>> pipe.enable_model_cpu_offload() >>> pipe.load_lora_weights("linoyts/HiDream-yarn-art-LoRA") >>> image = pipe("yoda, yarn art style").images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
dgambettaphd/M_llm3_gen6_run0_WXS_doc1000_synt64_MPP
dgambettaphd
2025-04-22T10:39:13Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:38:56Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hartunka/distilbert_km_10_v2_rte
Hartunka
2025-04-22T10:37:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_km_10_v2", "base_model:finetune:Hartunka/distilbert_km_10_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T10:37:24Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_km_10_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_km_10_v2_rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.5126353790613718 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_km_10_v2_rte This model is a fine-tuned version of [Hartunka/distilbert_km_10_v2](https://huggingface.co/Hartunka/distilbert_km_10_v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7064 - Accuracy: 0.5126 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7038 | 1.0 | 10 | 0.7064 | 0.5126 | | 0.6574 | 2.0 | 20 | 0.7129 | 0.5343 | | 0.6158 | 3.0 | 30 | 0.7406 | 0.4874 | | 0.538 | 4.0 | 40 | 0.7981 | 0.5235 | | 0.4361 | 5.0 | 50 | 0.9251 | 0.5090 | | 0.3292 | 6.0 | 60 | 1.1220 | 0.5199 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
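Since the card stops at the training results, here is a minimal inference sketch. It assumes the repository bundles its tokenizer and that the checkpoint follows the usual GLUE RTE label order (0 = entailment, 1 = not_entailment); check `model.config.id2label` to confirm.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Hartunka/distilbert_km_10_v2_rte"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

# RTE is a sentence-pair task: encode premise and hypothesis together.
premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
# Assumed GLUE RTE mapping; the checkpoint's config may override it.
labels = {0: "entailment", 1: "not_entailment"}
print(labels[logits.argmax(dim=-1).item()])
```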
gokulsrinivasagan/tinybert_train_book_ent_15p_ra_mnli
gokulsrinivasagan
2025-04-22T10:37:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/tinybert_train_book_ent_15p_ra", "base_model:finetune:gokulsrinivasagan/tinybert_train_book_ent_15p_ra", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T09:57:06Z
--- library_name: transformers language: - en license: apache-2.0 base_model: gokulsrinivasagan/tinybert_train_book_ent_15p_ra tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tinybert_train_book_ent_15p_ra_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.7026037428803905 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinybert_train_book_ent_15p_ra_mnli This model is a fine-tuned version of [gokulsrinivasagan/tinybert_train_book_ent_15p_ra](https://huggingface.co/gokulsrinivasagan/tinybert_train_book_ent_15p_ra) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7014 - Accuracy: 0.7026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.966 | 1.0 | 1534 | 0.8940 | 0.5812 | | 0.8563 | 2.0 | 3068 | 0.8119 | 0.6319 | | 0.7714 | 3.0 | 4602 | 0.7580 | 0.6703 | | 0.7038 | 4.0 | 6136 | 0.7431 | 0.6788 | | 0.6485 | 5.0 | 7670 | 0.7206 | 0.6931 | | 0.5986 | 6.0 | 9204 | 0.7499 | 0.6948 | | 0.553 | 7.0 | 10738 | 0.7412 | 0.6934 | | 0.5094 | 8.0 | 12272 | 0.7836 | 0.6959 | | 0.4692 | 9.0 | 13806 | 0.8306 | 0.7015 | | 0.4316 | 10.0 | 15340 | 0.8376 | 0.6971 | ### Framework versions - Transformers 4.51.2 - Pytorch 2.6.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
Hartunka/distilbert_km_10_v2_qqp
Hartunka
2025-04-22T10:37:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_km_10_v2", "base_model:finetune:Hartunka/distilbert_km_10_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T09:56:09Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_km_10_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert_km_10_v2_qqp results: - task: name: Text Classification type: text-classification dataset: name: GLUE QQP type: glue args: qqp metrics: - name: Accuracy type: accuracy value: 0.8151620084095968 - name: F1 type: f1 value: 0.7424435636739617 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_km_10_v2_qqp This model is a fine-tuned version of [Hartunka/distilbert_km_10_v2](https://huggingface.co/Hartunka/distilbert_km_10_v2) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.4022 - Accuracy: 0.8152 - F1: 0.7424 - Combined Score: 0.7788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.4781 | 1.0 | 1422 | 0.4353 | 0.7922 | 0.6916 | 0.7419 | | 0.3737 | 2.0 | 2844 | 0.4022 | 0.8152 | 0.7424 | 0.7788 | | 0.2966 | 3.0 | 4266 | 0.4022 | 0.8235 | 0.7638 | 0.7937 | | 0.2309 | 4.0 | 5688 | 0.4382 | 0.8306 | 0.7563 | 0.7935 | | 0.1801 | 5.0 | 7110 | 0.4994 | 0.8319 | 0.7657 | 0.7988 | | 0.1423 | 6.0 | 8532 | 0.5064 | 0.8275 | 0.7721 | 0.7998 | | 0.1143 | 7.0 | 9954 | 0.5898 | 0.8289 | 0.7747 | 0.8018 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
deswaq/juh62
deswaq
2025-04-22T10:36:52Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:33:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cortexso/deepseek-r1
cortexso
2025-04-22T10:35:00Z
78,618
0
null
[ "gguf", "cortexp.cpp", "featured", "text-generation", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-02-03T12:56:17Z
--- license: mit pipeline_tag: text-generation tags: - cortexp.cpp - featured --- ## Overview **DeepSeek** developed and released the **DeepSeek-R1** series, featuring multiple model sizes fine-tuned for high-performance text generation. These models are optimized for dialogue, reasoning, and information-seeking tasks, providing a balance of efficiency and accuracy while maintaining a smaller footprint compared to their original counterparts. The DeepSeek-R1 models include distilled and full-scale variants of both **Qwen** and **Llama** architectures, catering to various applications such as customer support, conversational AI, research, and enterprise automation. ## Variants ### DeepSeek-R1 | No | Variant | Branch | Cortex CLI command | | -- | ------- | ------ | ------------------ | | 1 | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/cortexso/deepseek-r1/tree/1.5b) | 1.5b | `cortex run deepseek-r1:1.5b` | | 2 | [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/cortexso/deepseek-r1/tree/7b) | 7b | `cortex run deepseek-r1:7b` | | 3 | [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/cortexso/deepseek-r1/tree/8b) | 8b | `cortex run deepseek-r1:8b` | | 4 | [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/cortexso/deepseek-r1/tree/14b) | 14b | `cortex run deepseek-r1:14b` | | 5 | [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/cortexso/deepseek-r1/tree/32b) | 32b | `cortex run deepseek-r1:32b` | | 6 | [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/cortexso/deepseek-r1/tree/70b) | 70b | `cortex run deepseek-r1:70b` | Each branch contains a default quantized version: - **Qwen-1.5B:** q4-km - **Qwen-7B:** q4-km - **Llama-8B:** q4-km - **Qwen-14B:** q4-km - **Qwen-32B:** q4-km - **Llama-70B:** q4-km ## Use it with Jan (UI) 1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) 2. Use it in the Jan model Hub: ```text cortexso/deepseek-r1 ``` ## Use it with Cortex (CLI) 1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) 2. Run the model with the command: ```bash cortex run deepseek-r1 ``` ## Credits - **Author:** DeepSeek - **Converter:** [Homebrew](https://www.homebrew.ltd/) - **Original License:** [License](https://huggingface.co/deepseek-ai/DeepSeek-R1#license) - **Papers:** [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/html/2501.12948v1)
lawrencerobertasparks4hvg/dfbdfb
lawrencerobertasparks4hvg
2025-04-22T10:34:14Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-04-22T10:34:14Z
--- license: bsd-3-clause ---
prepro1/Llama-3.2-3B-Instruct_LORA_1d
prepro1
2025-04-22T10:32:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-22T10:31:31Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** prepro1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
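The card does not show how to run the model, so the following is a rough sketch rather than an official recipe: because the repository is tagged `4-bit` and `bitsandbytes`, loading it as stored needs `bitsandbytes` installed and a CUDA GPU, and the chat-style call assumes the tokenizer ships a chat template, as Llama-3.2 Instruct derivatives usually do.

```python
from transformers import pipeline

# 4-bit bitsandbytes checkpoint: requires bitsandbytes and a CUDA device.
generator = pipeline(
    "text-generation",
    model="prepro1/Llama-3.2-3B-Instruct_LORA_1d",
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
out = generator(messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```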
BoysonH45682/vbxvxvzx
BoysonH45682
2025-04-22T10:31:25Z
0
0
null
[ "license:cc-by-sa-3.0", "region:us" ]
null
2025-04-22T10:31:25Z
--- license: cc-by-sa-3.0 ---
ZeroAgency/Zero-Mistral-24B
ZeroAgency
2025-04-22T10:30:58Z
0
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "chat", "conversational", "ru", "en", "dataset:ZeroAgency/ru-big-russian-dataset", "arxiv:1910.09700", "base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T15:49:20Z
--- license: mit datasets: - ZeroAgency/ru-big-russian-dataset language: - ru - en tags: - mistral - chat - conversational - transformers inference: parameters: temperature: 0 pipeline_tag: text-generation base_model: - mistralai/Mistral-Small-3.1-24B-Instruct-2503 library_name: transformers --- # Model Card for Zero-Mistral-24B **Zero-Mistral-24B** is an improved TEXT-only version of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503), primarily adapted for the Russian and English languages. The original Mistral model contains vision features, which were removed from this model. The training involved an SFT stage, primarily on the [Big Russian Dataset](https://huggingface.co/datasets/ZeroAgency/ru-big-russian-dataset) dataset and a proprietary dataset from [Shkolkovo.online](https://shkolkovo.online/?utm_source=hf). The model has good math skills and some reasoning abilities. The model retains the original Mistral long-context capabilities of up to 128k tokens. ## Model Details ![image/png](https://huggingface.co/ZeroAgency/Zero-Mistral-24B/resolve/main/zero-mistral-500.png) ### Model Description - **Developed by:** [ZeroAgency.ru](https://zeroagency.ru/?utm_source=hf) - **Funded by:** [ZeroAgency.ru](https://zeroagency.ru/?utm_source=hf) and [Shkolkovo.online](https://shkolkovo.online/?utm_source=hf) - **Shared by:** [Alexander Kozhevnikov](https://t.me/ak_segfault) (developer) - **Model type:** LLM - **Language(s) (NLP):** Russian, English - **License:** MIT - **Finetuned from model:** [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) ### 📚 Model versions - [Merged 16-bit](https://huggingface.co/ZeroAgency/Zero-Mistral-24B) - the original 16-bit merged version for transformers. - [GGUF](https://huggingface.co/ZeroAgency/Zero-Mistral-24B-gguf) - different GGUF versions: BF16, F16, Q8_0, Q6_K, Q4_K_M, IQ4_XS, etc. ## 📊 Benchmarks for main 16-bit merged version ### MERA **MERA score**: `0.623` | Task | Result | Metric | |--------------|----------------------|--------------------| | LCS | 0.194 | Accuracy | | RCB | 0.607 / 0.592 | Avg. F1 / Accuracy | | USE | 0.452 | Grade Norm | | RWSD | 0.55 | Accuracy | | PARus | 0.942 | Accuracy | | ruTiE | 0.868 | Accuracy | | MultiQ | 0.781 / 0.629 | F1-score/EM | | CheGeKa | 0.397 / 0.322 | F1 / EM | | ruModAr | 0.971 | EM | | MaMuRAMu | 0.832 | Accuracy | | ruMultiAr | 0.354 | EM | | ruCodeEval | 0 / 0 / 0 | pass@k `¯\_(ツ)_/¯`| | MathLogicQA | 0.613 | Accuracy | | ruWorldTree | 0.987 / 0.987 | Avg. F1 / Accuracy | | ruOpenBookQA | 0.913 / 0.913 | Avg.
F1 / Accuracy | Scores on open tasks: | Task | Result | Metric | |--------------|---------------|---------| | BPS | 0.981 | Accuracy | | ruMMLU | 0.778 | Accuracy | | SimpleAr | 0.997 | EM | | ruHumanEval | 0.006 / 0.006 / 0.006 | pass@k `¯\_(ツ)_/¯` | | ruHHH | 0.916 | Accuracy | | ruHateSpeech | 0.834 | Accuracy | | ruDetox | 0.341 / 0.843 / 0.624 / 0.66 | Overall average score (J) / Meaning preservation (SIM) / Fluency (FL) / Style transfer accuracy (STA) | | ruEthics | [[0.386, 0.399, 0.41, 0.333, 0.327], [0.421, 0.427, 0.452, 0.375, 0.363], [0.653, 0.65, 0.697, 0.596, 0.573]] | 5 MCC | ## Usage The model can be used with the following frameworks: - [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm) - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers) - [`llama.cpp`](https://github.com/ggml-org/llama.cpp): See [here](#llama-server) ### Recommended system prompts ```python prompts = { "generic": "Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь.", "think": """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Answer in the following format: <think>Reasoning: ...</think> ...""", "task": "Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Реши задачу по инструкции ниже. Не извиняйся, не строй диалог.", "task_think": """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Реши задачу по инструкции ниже. Не извиняйся, не строй диалог. Answer in the following format: <think>Reasoning: ...</think> ...""", "english_generic": """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\") """, "english_think": """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\") Answer in the following format: <think>Reasoning: ...</think> """, } ``` ### vLLM We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")""" ``` **Note 3**: flash_attn or flashinfer-python is preferred for better performance. **_Installation_** Make sure you install [`vLLM >= 0.8.4`](https://github.com/vllm-project/vllm/releases/tag/v0.8.4): ``` pip install --upgrade vllm ``` Also make sure you have [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4) installed: ``` pip install --upgrade mistral_common ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from the [docker hub](https://hub.docker.com/r/vllm/vllm-openai/tags). #### Server We recommend that you use ZeroAgency/Zero-Mistral-24B in a server/client setting. 1. Spin up a server: ``` vllm serve ZeroAgency/Zero-Mistral-24B --enable-prefix-caching --dtype bfloat16 --max-model-len 32768 --tool-call-parser mistral --enable-auto-tool-choice ``` **Note:** Running Zero-Mistral-24B on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To query the server, you can use a simple Python snippet. ```py import requests import json from datetime import datetime, timedelta url = "http://<your-server>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "ZeroAgency/Zero-Mistral-24B" messages = [ { "role": "system", "content": """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Реши задачу по инструкции ниже. Не извиняйся, не строй диалог. Answer in the following format: <think>Reasoning: ...</think> ...""" }, { # Task from https://3.shkolkovo.online/catalog/2552/93150 "role": "user", "content": """Первый рабочий за час делает на 9 деталей больше, чем второй, и выполняет заказ, состоящий из 216 деталей, на 4 часа быстрее, чем второй рабочий, выполняющий такой же заказ.
Сколько деталей в час делает первый рабочий?""" }, ] data = {"model": model, "messages": messages} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) #<think> Пусть x — количество деталей, которые делает второй рабочий за час. Тогда первый рабочий делает x + 9 деталей за час. Составим таблицу: Первый рабочий Второй рабочий Количество деталей в час x + 9 x Количество часов 216 : (x + 9) 216 : x Разность количества часов 4 216 : (x + 9) − 216 : x = 4 216x − 216(x + 9) = 4x(x + 9) 216x − 216x − 1944 = 4x^2 + 36x 1944 = 4x^2 + 36x 4x^2 + 36x − 1944 = 0 D = 36^2 + 4 · 4 · 1944 = 1296 + 31104 = 32400 = 180^2 x1 = −36 + 180 : 8 = 144 : 8 = 18 x2 = −36 − 180 : 8 < 0 — не подходит по смыслу задачи. Тогда первый рабочий делает 18 + 9 = 27 деталей в час. </think> #27 ``` #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model="ZeroAgency/Zero-Mistral-24B", tokenizer_mode="mistral", tensor_parallel_size=8) SYSTEM_PROMPT = """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Answer in the following format: <think>Reasoning: ...</think> ...""" user_prompt = """Что больше 9.9 или 9.11?""" messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] sampling_params = SamplingParams(max_tokens=512, temperature=0.0, top_p=1, top_k=-1) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) #<think> Задача: Сравните 9.9 и 9.11 для определения того, какой из них больше Подход: Десятичное сравнение с выравниванием десятичных точек Сложность: Низкий к среднему Я должен тщательно выровнять десятичные точки и сравнить цифры по месту. 1. Выровнять десятичные точки: 9.90 9.11 2. Сравните целые числа: оба имеют 9, поэтому они равны 3. Сравните десятые места: 9.90 имеет 9, 9.11 имеет 1 9 &gt; 1, поэтому 9.90 больше 4. Сравните сотые места: 9.90 имеет 0, 9.11 имеет 1 0 &lt; 1, но это не имеет значения, поскольку десятое место уже определило большее число<reflection>Я правильно выровнял десятичные точки и сравнил цифры по месту. Я заметил, что десятое место (9 против 1) определило, что 9.9 больше, чем 9.11. Сотые места не были необходимы для этого сравнения.</reflection> <self_improvement>В будущих сравнениях я буду уделять первоочередное внимание самым левым цифрам, где есть разница, чтобы оптимизировать процесс сравнения.</self_improvement> </think> 9.9 больше, чем 9.11. Когда вы сравниваете десятичные числа, вы начинаете с целых чисел, затем переходите к десятым местам, сотым местам и так далее. В этом случае 9.9 имеет 9 в десятом месте, в то время как 9.11 имеет 1 в десятом месте. Поскольку 9 &gt; 1, 9.9 больше, чем 9.11. ``` ### Transformers If you want to use Hugging Face transformers to generate text, you can do something like this. ```py from transformers import pipeline import torch messages = [ {"role": "user", "content": "Что больше 9.9 или 9.11?"}, ] chatbot = pipeline("text-generation", model="ZeroAgency/Zero-Mistral-24B", max_new_tokens=256, torch_dtype=torch.bfloat16) response = chatbot(messages, temperature=0.1) print(response[0]['generated_text'][1]['content']) # 9.9 больше, чем 9.11. 
``` ### llama-server You can run llama-server - an OpenAI-compatible server - for serving the [GGUF version](https://huggingface.co/ZeroAgency/Zero-Mistral-24B-gguf) of the model. Example of running it with a Docker container: ``` docker run --gpus all -v `pwd`:/mnt -p8000:8000 ghcr.io/ggml-org/llama.cpp:server-cuda -fa --port 8000 --host 0.0.0.0 --temp 0.0 --jinja -ngl 100 --api-key DUMMY-API-KEY -m /mnt/Zero-Mistral-24B-Q4_K_M_L.gguf ``` ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 8x H200 - **Hours used:** 29.5 - **Cloud Provider:** Runpod - **Compute Region:** US-DE - **Carbon Emitted:** `¯\_(ツ)_/¯`
likui34/88
likui34
2025-04-22T10:27:15Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-04-22T10:27:15Z
--- license: bigscience-bloom-rail-1.0 ---
zhuangshun45/16
zhuangshun45
2025-04-22T10:26:51Z
0
0
null
[ "license:cc-by-sa-3.0", "region:us" ]
null
2025-04-22T10:26:50Z
--- license: cc-by-sa-3.0 ---
leihen35/41
leihen35
2025-04-22T10:26:45Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-22T10:26:45Z
--- license: bigcode-openrail-m ---
ASethi04/google-gemma-2-9b-legalbench-first-lora
ASethi04
2025-04-22T10:25:23Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "endpoints_compatible", "region:us" ]
null
2025-04-22T09:19:38Z
--- base_model: google/gemma-2-9b library_name: transformers model_name: google-gemma-2-9b-legalbench-first-lora tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for google-gemma-2-9b-legalbench-first-lora This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-legalbench-first-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/aatqx7rs) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yan42g/yannicklora2
yan42g
2025-04-22T10:24:37Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-22T09:16:55Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: yannick --- # Yannicklora2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `yannick` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "yannick", "lora_weights": "https://huggingface.co/yan42g/yannicklora2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('yan42g/yannicklora2', weight_name='lora.safetensors') image = pipeline('yannick').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 3000 - Learning rate: 0.0004 - LoRA rank: 80 ## Contribute your own examples You can use the [community tab](https://huggingface.co/yan42g/yannicklora2/discussions) to add images that show off what you’ve made with this LoRA.
OnomaAIResearch/Illustrious-Lumina-v0.03
OnomaAIResearch
2025-04-22T10:24:24Z
0
34
null
[ "arxiv:2503.21758", "base_model:Alpha-VLLM/Lumina-Image-2.0", "base_model:finetune:Alpha-VLLM/Lumina-Image-2.0", "license:apache-2.0", "region:us" ]
null
2025-04-16T07:19:02Z
--- license: apache-2.0 base_model: - Alpha-VLLM/Lumina-Image-2.0 --- # Illustrious-Lumina-v0.03 This model is based on Alpha-VLLM/Lumina-Image-2.0, which is a nice, small DiT model with minimal guaranteed functionality! Please refer to https://github.com/Alpha-VLLM/Lumina-Image-2.0 for the official repository. [Paper](https://arxiv.org/abs/2503.21758) --- Before we dive into the details of 'Illustrious-Lumina-v0.03', we're excited to share that you can now generate images directly with our Illustrious XL models on our official site: [illustrious-xl.ai](http://illustrious-xl.ai/). We've launched a full image generation platform featuring high-res outputs, natural language prompting, and custom presets - plus, several exclusive models you won't find on any other hub. Explore our updated model tiers and naming here: [Model Series](https://www.illustrious-xl.ai/updates/20). Need help getting started? Check out our generation user guide: [ILXL Image Generation User Guide](https://www.illustrious-xl.ai/updates/21). --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63398de08f27255b6b50081a/OamvrbyYicsGvp2ShVoaq.png) ## 1. Model Overview - **Architecture**: **2 B parameters** DiT. - **Text Encoder**: Pure LLM, **Gemma-2-2b** - **Goal of this fork**: We test if the image backbone can learn illustration concepts **without** re‑training the LLM component. --- **Illustrious-Lumina-v0.03** is an experimental epoch of a Lumina-2.0-based training session, intended to validate whether a small DiT model with a pure-LLM text encoder can be trained as an illustration-focused model. The original model is unfortunately bad at illustrations and lacked the relevant knowledge, so the run focused on training the absent concepts. After 26,500 steps, Illustrious-Lumina-v0.03 has shown fast and successful adaptation to the dataset. However, please note that the original model is not good at illustrations, whereas our focus is only on illustrations, so it will take a while to reach a certain level. Examples are available in the [blog post](https://www.illustrious-xl.ai/blog). To test the model, please refer to the [Hugging Face space](https://huggingface.co/spaces/AngelBottomless/Lumina-Illustrious-v0.03). If you prefer to run the model locally, please use the **pth file** with the [official installation guide](https://github.com/OnomaAI/Illustrious-Lumina). **The safetensors file is meant only to "contain the weights" - for a ComfyUI-compatible format, we will try to prepare one as soon as possible.** ## 2. Training Setup | Item | Value | |------|-------| | Images Seen Total | 22 M image–text pairs | | Steps | 26,500 | | Global batch | 768 | | Resolution | 1024, 256 | | Checkpoint | `Illustrious_Lumina_2b_22100_ema_unified_fp32.safetensors` | The model has seen 22M image-text pairs. To accelerate the training, multi-resolution training was utilized. ## 3. Inference Demo Code If you prefer to run the model locally, please use the **pth file** with the [official installation guide](https://github.com/OnomaAI/Illustrious-Lumina). The setup used for the header image can be replicated with the following settings: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63398de08f27255b6b50081a/qmFDOkCiu-gjG4X0r2ydO.png) ## 4. Disclaimer The model does not reflect any final product and is intended to be used for research analysis only. The model is not production-ready; use at your own risk.
The model is in a Proof of Concept stage: it was trained with supposedly ~3% of the compute required for full training, with only 22M samples seen, using low-resolution joint training on A6000 GPUs. For training acceleration, please consider supporting us via the [Support site](https://illustrious-xl.ai/model/17)!
ASethi04/Qwen-Qwen2.5-7B-opc-sft-first-lora
ASethi04
2025-04-22T10:24:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "endpoints_compatible", "region:us" ]
null
2025-04-22T09:20:19Z
--- base_model: Qwen/Qwen2.5-7B library_name: transformers model_name: Qwen-Qwen2.5-7B-opc-sft-first-lora tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen-Qwen2.5-7B-opc-sft-first-lora This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-opc-sft-first-lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/rr8jhlaf) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
VanwyeD81367/ZxZCxc
VanwyeD81367
2025-04-22T10:23:47Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2025-04-22T10:23:47Z
--- license: bsd-2-clause ---
rio11user/phone_20_3_v7
rio11user
2025-04-22T10:20:28Z
0
0
null
[ "safetensors", "bert", "region:us" ]
null
2025-04-22T10:20:01Z
# phone_20_3_v7 - Model: SimCSE (BERT-based) - Task: next-utterance prediction (15-second conversation → next 15 seconds) - Data size: about 3,500 examples - Training method: supervised SimCSE + cross entropy
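The card gives no usage snippet, so below is a minimal sketch of how a supervised SimCSE encoder is typically applied to next-utterance ranking. It assumes the repository loads as a plain BERT encoder via `AutoModel` and that [CLS] pooling was used - neither is confirmed by the card - and the example utterances are invented (the model itself targets Japanese dialogue).

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

repo = "rio11user/phone_20_3_v7"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
model.eval()

def embed(texts):
    # [CLS] pooling, the common choice for supervised SimCSE encoders.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    return F.normalize(hidden[:, 0], dim=-1)

context = "Thank you for calling. What seems to be the problem with your phone?"
candidates = [
    "The screen cracked yesterday and the touch no longer works.",
    "I'd like to renew my magazine subscription.",
    "It was sunny all weekend.",
]
# Rank candidate next utterances by cosine similarity to the context.
scores = embed([context]) @ embed(candidates).T
print(candidates[scores.argmax().item()])
```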
CLEAR-Global/w2v-bert-2.0-chichewa_34_307h
CLEAR-Global
2025-04-22T10:16:20Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "CLEAR-Global/chichewa_34_307h", "generated_from_trainer", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-21T15:34:54Z
--- library_name: transformers license: mit base_model: facebook/w2v-bert-2.0 tags: - automatic-speech-recognition - CLEAR-Global/chichewa_34_307h - generated_from_trainer metrics: - wer model-index: - name: w2v-bert-2.0-chichewa_34_307h results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-chichewa_34_307h This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the CLEAR-GLOBAL/CHICHEWA_34_307H - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.2792 - Wer: 0.3856 - Cer: 0.1100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 100000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:-----:|:---------------:|:------:|:------:| | 2.7235 | 0.2896 | 1000 | 2.9405 | 0.9854 | 0.8901 | | 0.1802 | 0.5792 | 2000 | 0.9285 | 0.6857 | 0.2027 | | 0.1404 | 0.8688 | 3000 | 0.6584 | 0.5723 | 0.1737 | | 0.0446 | 1.1584 | 4000 | 0.5458 | 0.5495 | 0.1613 | | 0.051 | 1.4480 | 5000 | 0.5079 | 0.5297 | 0.1528 | | 0.0326 | 1.7376 | 6000 | 0.5507 | 0.5111 | 0.1529 | | 0.033 | 2.0272 | 7000 | 0.4940 | 0.4774 | 0.1412 | | 0.0341 | 2.3168 | 8000 | 0.4784 | 0.4954 | 0.1410 | | 0.0308 | 2.6064 | 9000 | 0.4140 | 0.4981 | 0.1390 | | 0.0216 | 2.8960 | 10000 | 0.3997 | 0.4689 | 0.1340 | | 0.0262 | 3.1856 | 11000 | 0.3943 | 0.4716 | 0.1374 | | 0.0216 | 3.4752 | 12000 | 0.3600 | 0.4463 | 0.1306 | | 0.0137 | 3.7648 | 13000 | 0.3348 | 0.4286 | 0.1236 | | 0.0154 | 4.0544 | 14000 | 0.3559 | 0.4290 | 0.1247 | | 0.0147 | 4.3440 | 15000 | 0.3498 | 0.4234 | 0.1232 | | 0.0334 | 4.6337 | 16000 | 0.3606 | 0.4261 | 0.1236 | | 0.0097 | 4.9233 | 17000 | 0.3384 | 0.4054 | 0.1176 | | 0.0099 | 5.2129 | 18000 | 0.3286 | 0.4323 | 0.1237 | | 0.0167 | 5.5025 | 19000 | 0.3260 | 0.4192 | 0.1210 | | 0.0097 | 5.7921 | 20000 | 0.3196 | 0.4198 | 0.1220 | | 0.0101 | 6.0817 | 21000 | 0.3173 | 0.4121 | 0.1177 | | 0.0152 | 6.3713 | 22000 | 0.3083 | 0.3943 | 0.1132 | | 0.0116 | 6.6609 | 23000 | 0.3192 | 0.4119 | 0.1157 | | 0.0165 | 6.9505 | 24000 | 0.3216 | 0.4117 | 0.1186 | | 0.0071 | 7.2401 | 25000 | 0.3019 | 0.3828 | 0.1134 | | 0.0125 | 7.5297 | 26000 | 0.3002 | 0.3975 | 0.1144 | | 0.0056 | 7.8193 | 27000 | 0.3025 | 0.3924 | 0.1131 | | 0.0137 | 8.1089 | 28000 | 0.2918 | 0.3876 | 0.1122 | | 0.0062 | 8.3985 | 29000 | 0.2874 | 0.3845 | 0.1138 | | 0.0066 | 8.6881 | 30000 | 0.2793 | 0.3847 | 0.1100 | | 0.0181 | 8.9777 | 31000 | 0.2827 | 0.3642 | 0.1070 | | 0.0045 | 9.2673 | 32000 | 0.2890 | 0.3878 | 0.1152 | | 0.0043 | 9.5569 | 33000 | 0.3049 | 0.4021 | 0.1164 | | 0.0113 | 9.8465 | 34000 | 0.2855 | 0.3759 | 0.1085 | | 0.0119 | 10.1361 | 35000 | 0.2992 | 0.3782 | 0.1120 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 
0.21.1
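The card lists metrics but no inference example; a minimal sketch follows. It assumes the repository bundles the processor alongside the fine-tuned weights, which is typical for `generated_from_trainer` w2v-bert checkpoints; the audio filename is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CLEAR-Global/w2v-bert-2.0-chichewa_34_307h",
)
# Accepts a file path or a raw waveform; the pipeline resamples input
# to the 16 kHz rate the w2v-bert feature extractor expects.
result = asr("sample_chichewa.wav")  # placeholder path
print(result["text"])
```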
sleepdeprived3/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-Q3_K_M-GGUF
sleepdeprived3
2025-04-22T10:16:08Z
0
0
null
[ "gguf", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B", "base_model:merge:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-22T10:15:14Z
--- base_model: ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B language: - en license: apache-2.0 pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence - llama-cpp - gguf-my-repo base_model_relation: merge --- # sleepdeprived3/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-Q3_K_M-GGUF This model was converted to GGUF format from [`ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B`](https://huggingface.co/ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo sleepdeprived3/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-Q3_K_M-GGUF --hf-file omega-darker-gaslight_the-final-forgotten-fever-dream-24b-q3_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo sleepdeprived3/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-Q3_K_M-GGUF --hf-file omega-darker-gaslight_the-final-forgotten-fever-dream-24b-q3_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo sleepdeprived3/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-Q3_K_M-GGUF --hf-file omega-darker-gaslight_the-final-forgotten-fever-dream-24b-q3_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo sleepdeprived3/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-Q3_K_M-GGUF --hf-file omega-darker-gaslight_the-final-forgotten-fever-dream-24b-q3_k_m.gguf -c 2048 ```
mshrafi/mlmodel
mshrafi
2025-04-22T10:16:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-22T10:16:03Z
--- license: creativeml-openrail-m ---
Hartunka/tiny_bert_rand_50_v2_qnli
Hartunka
2025-04-22T10:14:05Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/tiny_bert_rand_50_v2", "base_model:finetune:Hartunka/tiny_bert_rand_50_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T10:08:06Z
--- library_name: transformers language: - en base_model: Hartunka/tiny_bert_rand_50_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny_bert_rand_50_v2_qnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.6124839831594362 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny_bert_rand_50_v2_qnli This model is a fine-tuned version of [Hartunka/tiny_bert_rand_50_v2](https://huggingface.co/Hartunka/tiny_bert_rand_50_v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6521 - Accuracy: 0.6125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6655 | 1.0 | 410 | 0.6521 | 0.6125 | | 0.6359 | 2.0 | 820 | 0.6523 | 0.6189 | | 0.5935 | 3.0 | 1230 | 0.6676 | 0.6204 | | 0.5338 | 4.0 | 1640 | 0.7061 | 0.6215 | | 0.4667 | 5.0 | 2050 | 0.7881 | 0.6158 | | 0.3973 | 6.0 | 2460 | 0.9106 | 0.6110 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
kawausorin/kw_extract_gemma-3-1b-it-unsloth-ft-without-system
kawausorin
2025-04-22T10:12:56Z
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:12:52Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
PengZhang424242/whisper-tiny.en-ONNX
PengZhang424242
2025-04-22T10:12:51Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-tiny.en", "base_model:quantized:openai/whisper-tiny.en", "region:us" ]
automatic-speech-recognition
2025-04-22T10:12:27Z
---
library_name: transformers.js
base_model:
- openai/whisper-tiny.en
---

# whisper-tiny.en (ONNX)

This is an ONNX version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
mmmmin1/my_awesome_opus_books_model
mmmmin1
2025-04-22T10:08:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-22T09:18:46Z
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_opus_books_model

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6046
- Bleu: 6.2364
- Gen Len: 18.3447

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8609        | 1.0   | 6355  | 1.6288          | 6.0415 | 18.3456 |
| 1.824         | 2.0   | 12710 | 1.6046          | 6.2364 | 18.3447 |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
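## Example usage

The card does not document an inference snippet or the translation direction. The sketch below assumes the common opus_books English→French setup and T5's task-prefix convention; both the language pair and the prefix are assumptions, not facts stated by this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mmmmin1/my_awesome_opus_books_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 is prompted with a task prefix; en->fr is assumed here, not documented.
text = "translate English to French: The book is on the table."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```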
mlfoundations-dev/qos_boost_qos_bprod_Qwen2.5-7B-Instruct_openthoughts2_100k
mlfoundations-dev
2025-04-22T10:08:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:05:30Z
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qos_boost_qos_bprod_Qwen2.5-7B-Instruct_openthoughts2_100k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# qos_boost_qos_bprod_Qwen2.5-7B-Instruct_openthoughts2_100k

This model is a fine-tuned version of [/leonardo_work/EUHPC_E03_068/DCFT_shared/hub/models--Qwen--Qwen2.5-7B-Instruct/snapshots/a09a35458c702b33eeacc393d103063234e8bc28](https://huggingface.co//leonardo_work/EUHPC_E03_068/DCFT_shared/hub/models--Qwen--Qwen2.5-7B-Instruct/snapshots/a09a35458c702b33eeacc393d103063234e8bc28) on the mlfoundations-dev/openthoughts2_100k dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0

### Training results

### Framework versions

- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.0
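## Example usage

No inference snippet is provided above. As a minimal sketch, assuming the checkpoint loads and chats like its Qwen2.5-7B-Instruct base (with the chat template shipped in the tokenizer):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mlfoundations-dev/qos_boost_qos_bprod_Qwen2.5-7B-Instruct_openthoughts2_100k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain gradient checkpointing."}]
# Apply the chat template before generation, as for the base Instruct model.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```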
Hartunka/tiny_bert_rand_50_v2_cola
Hartunka
2025-04-22T10:07:02Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/tiny_bert_rand_50_v2", "base_model:finetune:Hartunka/tiny_bert_rand_50_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T10:06:13Z
---
library_name: transformers
language:
- en
base_model: Hartunka/tiny_bert_rand_50_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: tiny_bert_rand_50_v2_cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE COLA
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.6912751793861389
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny_bert_rand_50_v2_cola

This model is a fine-tuned version of [Hartunka/tiny_bert_rand_50_v2](https://huggingface.co/Hartunka/tiny_bert_rand_50_v2) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6176
- Matthews Correlation: 0.0
- Accuracy: 0.6913

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6135        | 1.0   | 34   | 0.6176          | 0.0                  | 0.6913   |
| 0.6006        | 2.0   | 68   | 0.6197          | 0.0                  | 0.6913   |
| 0.5776        | 3.0   | 102  | 0.6242          | 0.0372               | 0.6903   |
| 0.5383        | 4.0   | 136  | 0.6582          | 0.0622               | 0.6721   |
| 0.4936        | 5.0   | 170  | 0.6671          | 0.0857               | 0.6491   |
| 0.4569        | 6.0   | 204  | 0.7221          | 0.0817               | 0.6376   |

### Framework versions

- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
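## Example usage

A minimal classification sketch. Note that a Matthews correlation of 0.0 with accuracy equal to the majority-class rate suggests the checkpoint predicts a single class for nearly every input, so treat its outputs accordingly. The label mapping below follows the usual GLUE CoLA convention (0 = unacceptable, 1 = acceptable) and is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hartunka/tiny_bert_rand_50_v2_cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Assumed GLUE CoLA labels: 0 = unacceptable, 1 = acceptable.
print("acceptable" if logits.argmax(-1).item() == 1 else "unacceptable")
```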
shibajustfor/9a699c4e-3ff3-4fcd-afcd-07ee086bb372
shibajustfor
2025-04-22T10:05:55Z
0
0
transformers
[ "transformers", "generated_from_trainer", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-04-22T10:05:47Z
---
library_name: transformers
model_name: shibajustfor/9a699c4e-3ff3-4fcd-afcd-07ee086bb372
tags:
- generated_from_trainer
- unsloth
licence: license
---

# Model Card for shibajustfor/9a699c4e-3ff3-4fcd-afcd-07ee086bb372

This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shibajustfor/9a699c4e-3ff3-4fcd-afcd-07ee086bb372", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

### Framework versions

- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
rehabaam/ds-cxr-covid19
rehabaam
2025-04-22T10:02:27Z
0
0
keras
[ "keras", "CNN", "covid19", "Lung-Opacity", "Viral-Pneumonia", "en", "base_model:rehabaam/ds-cxr-covid19", "base_model:finetune:rehabaam/ds-cxr-covid19", "license:mit", "region:us" ]
null
2025-04-22T09:32:09Z
---
license: mit
language:
- en
metrics:
- accuracy
- f1
base_model:
- rehabaam/ds-cxr-covid19
tags:
- CNN
- covid19
- Lung-Opacity
- Viral-Pneumonia
---

# Chest X-Ray Classification Model (🦠)

## 📋 Overview

This project focuses on building and evaluating a Convolutional Neural Network (CNN) model for classifying chest X-ray images into four categories:

- Normal
- Pneumonia
- Lung Opacity
- COVID-19

The model was trained on masked chest X-ray images (lungs only) to keep its attention on medically relevant areas.

## 🧠 Model Architecture

The CNN model includes:

- **Input size**: (256, 256, 1) single-channel (grayscale) masked lung images
- **Convolutional blocks**: Conv2D(32) → Conv2D(64) → Conv2D(128) → Conv2D(256) → Conv2D(512)
- **ASPP block**: Atrous Spatial Pyramid Pooling (ASPP) to capture multi-scale features
- **Attention block**: Squeeze-and-Excitation (SE) blocks applied after key stages
- **Pooling layers**: Global Average Pooling 2D
- **Custom loss function**: Weights hard examples more heavily and easy ones less
- **Classifier head**: Dense → Softmax for multiclass classification (4 classes)

Additional techniques used:

- **Data augmentation**: Random flipping and rotation (0 to 10 degrees)
- **Dropout**: 20%, as regularization against overfitting
- **EarlyStopping & ReduceLROnPlateau**: For efficient training

## 📊 Metrics

Final evaluation results:

| Metric    | Score |
|-----------|-------|
| Accuracy  | ~93%  |
| Precision | ~92%  |
| Recall    | ~92%  |
| F1-Score  | ~92%  |

Notes:

- The dataset was split manually into training and validation sets (80%/20%), with classes balanced.
- Grad-CAM visualization was used to verify that the model attends to regions inside the lungs.
- The model is still being improved for higher F1 scores.

## 🗃 Dataset

- **Source**: https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database
- **Masked lungs**: Lung masks were generated using a GAN model (maja011235/lung-segmentation-gan)

## 🚀 Future Work

- Fine-tuning with different loss functions
- Model ensembling
- Clinical-grade evaluation on external datasets
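## 🧪 Example usage

The card does not show how to load the model. Below is a minimal sketch using `huggingface_hub.from_pretrained_keras`; the preprocessing (256×256 grayscale, scaled to [0, 1]) and the class order are assumptions to verify against the repository, and inputs should be lung-masked X-rays, matching the training data.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras
from PIL import Image

model = from_pretrained_keras("rehabaam/ds-cxr-covid19")

# Assumed preprocessing: 256x256 single-channel image scaled to [0, 1].
img = Image.open("masked_cxr.png").convert("L").resize((256, 256))
x = np.asarray(img, dtype="float32")[None, :, :, None] / 255.0  # shape (1, 256, 256, 1)

probs = model.predict(x)[0]
classes = ["COVID-19", "Lung Opacity", "Normal", "Viral Pneumonia"]  # assumed order
print(dict(zip(classes, np.round(probs, 3))))
```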
adedayoakinade/dqn-SpaceInvadersNoFrameskip-v4
adedayoakinade
2025-04-22T09:56:59Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-22T09:56:21Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 785.50 +/- 216.20
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adedayoakinade -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adedayoakinade -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga adedayoakinade
```

## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
SVECTOR-OFFICIAL/gemma-3-finetune
SVECTOR-OFFICIAL
2025-04-22T09:56:32Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T09:47:15Z
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** SVECTOR-OFFICIAL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it

This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
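A minimal generation sketch, assuming the checkpoint works with the standard `transformers` text-generation pipeline and the Gemma-3 chat template (a transformers version with Gemma-3 support is required):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="SVECTOR-OFFICIAL/gemma-3-finetune", device_map="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning does, in two sentences."}]
# The pipeline applies the model's chat template to the messages list.
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```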
gokulsrinivasagan/tinybert_train_book_ent_15p_ra_stsb
gokulsrinivasagan
2025-04-22T09:55:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/tinybert_train_book_ent_15p_ra", "base_model:finetune:gokulsrinivasagan/tinybert_train_book_ent_15p_ra", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T09:54:32Z
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: gokulsrinivasagan/tinybert_train_book_ent_15p_ra
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: tinybert_train_book_ent_15p_ra_stsb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE STSB
      type: glue
      args: stsb
    metrics:
    - name: Spearmanr
      type: spearmanr
      value: 0.13907746289881937
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tinybert_train_book_ent_15p_ra_stsb

This model is a fine-tuned version of [gokulsrinivasagan/tinybert_train_book_ent_15p_ra](https://huggingface.co/gokulsrinivasagan/tinybert_train_book_ent_15p_ra) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3765
- Pearson: 0.1563
- Spearmanr: 0.1391
- Combined Score: 0.1477

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.8797        | 1.0   | 23   | 2.5186          | 0.1078  | 0.1070    | 0.1074         |
| 2.0267        | 2.0   | 46   | 2.5221          | 0.1054  | 0.0966    | 0.1010         |
| 1.9495        | 3.0   | 69   | 2.3765          | 0.1563  | 0.1391    | 0.1477         |
| 1.8315        | 4.0   | 92   | 2.6262          | 0.1987  | 0.2044    | 0.2015         |
| 1.647         | 5.0   | 115  | 2.5150          | 0.2023  | 0.2061    | 0.2042         |
| 1.4464        | 6.0   | 138  | 2.4054          | 0.2370  | 0.2388    | 0.2379         |
| 1.2721        | 7.0   | 161  | 2.4680          | 0.2402  | 0.2403    | 0.2403         |
| 1.0755        | 8.0   | 184  | 3.0618          | 0.2150  | 0.2179    | 0.2165         |

### Framework versions

- Transformers 4.51.2
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
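## Example usage

STS-B is a regression task, so the classification head emits a single similarity score (roughly on the 0–5 GLUE scale). Given the low Pearson/Spearman correlations above, the scores are only weakly calibrated; the snippet below is a minimal sketch.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokulsrinivasagan/tinybert_train_book_ent_15p_ra_stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode a sentence pair; the model scores their semantic similarity.
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(f"similarity score: {score:.2f}")
```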
solo432/6445
solo432
2025-04-22T09:55:04Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-22T09:55:04Z
---
license: creativeml-openrail-m
---