Dataset schema (each row below lists these fields for one model, followed by the full contents of its `card` field):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-27 00:42:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (499 classes) | — | — |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-27 00:40:00 |
| card | string (length) | 11 | 1.01M |
**modelId:** Tharunya1/English-Spanis · **author:** Tharunya1
**last_modified:** 2025-05-02T19:25:32Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text2text-generation · **createdAt:** 2025-05-02T19:22:57Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
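The "How to Get Started" section above is empty, so here is a minimal, hypothetical loading sketch. It assumes only the standard mBART seq2seq interfaces implied by this row's `mbart` / `text2text-generation` tags; the repo id comes from the row metadata, and nothing about the model's actual inputs or quality is documented.

```python
# Hypothetical usage sketch for Tharunya1/English-Spanis -- the card gives no
# instructions; this assumes only the standard mBART seq2seq interfaces
# suggested by the repo's "mbart" / "text2text-generation" tags.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Tharunya1/English-Spanis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Presumably English -> Spanish translation, going by the repo name.
inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```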
**modelId:** chitra-tripathi-video/chitra.tripathi.Viral.Video.Link · **author:** chitra-tripathi-video
**last_modified:** 2025-05-02T19:25:16Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T19:20:35Z
[๐ŸŒ CLICK HERE ๐ŸŸข==โ–บโ–บ WATCH NOW](https://videohere.top/?V=chitra-tripathi) [๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)](https://videohere.top/?V=chitra-tripathi) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=chitra-tripathi)
**modelId:** pkmitl205/tinyBERT-Distill-WangchanBERTa · **author:** pkmitl205
**last_modified:** 2025-05-02T19:25:01Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-classification · **createdAt:** 2025-05-02T19:24:56Z
(The card is the same auto-generated 🤗 transformers template reproduced in full above: front matter `library_name: transformers`, `tags: []`, and every field "[More Information Needed]".)
**modelId:** dvteja/gemma-legal-qa · **author:** dvteja
**last_modified:** 2025-05-02T19:23:48Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T19:23:39Z
(Again the same auto-generated 🤗 transformers template as above, verbatim; all fields "[More Information Needed]".)
**modelId:** New-Tutorial-jobz-hunting/wATCH.TRENDING.VIDEO.Jobz.Hunting.Sajal.Malik.viral.video.Tutorial · **author:** New-Tutorial-jobz-hunting
**last_modified:** 2025-05-02T19:21:05Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T19:18:28Z
[🔴 ➤►Click Here to👉👉 (Full video Link)](https://videohere.top/?jobz-hunting)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?jobz-hunting)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting)
**modelId:** OSRSEnthusiast/trainer_output · **author:** OSRSEnthusiast
**last_modified:** 2025-05-02T19:20:40Z · **downloads:** 6 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** image-classification · **createdAt:** 2025-04-26T05:06:39Z
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: trainer_output
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8775510204081632
---

# trainer_output

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.3427
- Accuracy: 0.8776

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 39   | 0.4124          | 0.7938   |
| No log        | 2.0   | 78   | 0.3294          | 0.8454   |
| 0.4497        | 3.0   | 117  | 0.2932          | 0.8454   |
| 0.4497        | 4.0   | 156  | 0.2799          | 0.8557   |
| 0.4497        | 5.0   | 195  | 0.2692          | 0.8969   |
| 0.2764        | 6.0   | 234  | 0.2604          | 0.8969   |
| 0.2764        | 7.0   | 273  | 0.2583          | 0.9175   |
| 0.2192        | 8.0   | 312  | 0.2546          | 0.9072   |
| 0.2192        | 9.0   | 351  | 0.2506          | 0.9072   |
| 0.2192        | 10.0  | 390  | 0.2536          | 0.9072   |
| 0.1936        | 11.0  | 429  | 0.2530          | 0.8866   |
| 0.1936        | 12.0  | 468  | 0.2503          | 0.9072   |
| 0.1731        | 13.0  | 507  | 0.2480          | 0.9072   |
| 0.1731        | 14.0  | 546  | 0.2496          | 0.9072   |
| 0.1731        | 15.0  | 585  | 0.2498          | 0.9072   |
| 0.155         | 16.0  | 624  | 0.2498          | 0.9072   |
| 0.155         | 17.0  | 663  | 0.2495          | 0.9072   |
| 0.1442        | 18.0  | 702  | 0.2488          | 0.9072   |
| 0.1442        | 19.0  | 741  | 0.2493          | 0.9072   |
| 0.1442        | 20.0  | 780  | 0.2490          | 0.9072   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
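The card leaves usage undocumented; below is a minimal, assumed inference sketch using the standard `transformers` image-classification pipeline. The class labels come from whatever `imagefolder` dataset the author used, which the card does not describe, and the image path is a placeholder.

```python
# Assumed usage sketch -- the card does not document inference. This relies only
# on the standard ViT image-classification interfaces implied by the repo tags.
from transformers import pipeline

classifier = pipeline("image-classification", model="OSRSEnthusiast/trainer_output")
predictions = classifier("example.jpg")  # placeholder image path
for p in predictions:
    print(p["label"], round(p["score"], 4))
```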
**modelId:** biustnaspust/purpur25 · **author:** biustnaspust
**last_modified:** 2025-05-02T19:16:30Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2025-05-02T19:12:07Z
(The same auto-generated 🤗 transformers template once more, verbatim; all fields "[More Information Needed]".)
**modelId:** 18-Tutorial-Paro-Aarti/Original.Viral.Clip.Paro.Aarti.Viral.Video.Leaks.official · **author:** 18-Tutorial-Paro-Aarti
**last_modified:** 2025-05-02T19:15:49Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T19:15:43Z
[๐ŸŒ CLICK HERE ๐ŸŸข==โ–บโ–บ WATCH NOW](https://videohere.top/?V=Paro-Aarti) [๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)](https://videohere.top/?V=Paro-Aarti) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Paro-Aarti)
**modelId:** mradermacher/openbuddy-thinker-32b-v26-preview-GGUF · **author:** mradermacher
**last_modified:** 2025-05-02T19:08:28Z · **downloads:** 202 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "gguf", "qwen2.5", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "base_model:OpenBuddy/openbuddy-thinker-32b-v26-preview", "base_model:quantized:OpenBuddy/openbuddy-thinker-32b-v26-preview", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
**pipeline_tag:** null · **createdAt:** 2025-04-27T19:52:58Z
---
base_model: OpenBuddy/openbuddy-thinker-32b-v26-preview
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- qwen2.5
---

## About

static quants of https://huggingface.co/OpenBuddy/openbuddy-thinker-32b-v26-preview

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q2_K.gguf) | Q2_K | 12.4 |  |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q3_K_S.gguf) | Q3_K_S | 14.5 |  |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q3_K_L.gguf) | Q3_K_L | 17.3 |  |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.IQ4_XS.gguf) | IQ4_XS | 18.0 |  |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q5_K_S.gguf) | Q5_K_S | 22.7 |  |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q5_K_M.gguf) | Q5_K_M | 23.4 |  |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
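For readers who want a concrete starting point, here is one hedged way to fetch a single quant and hand it to llama.cpp. The choice of the Q4_K_M file is arbitrary, and the sketch assumes `llama-cli` is already installed and on `PATH` (e.g., via the llama.cpp instructions the Usage section points toward).

```python
# Hedged example: download one quant from this repo with huggingface_hub, then
# run it with an existing llama.cpp install. The Q4_K_M choice is arbitrary,
# and llama-cli must already be on PATH.
from huggingface_hub import hf_hub_download
import subprocess

gguf_path = hf_hub_download(
    repo_id="mradermacher/openbuddy-thinker-32b-v26-preview-GGUF",
    filename="openbuddy-thinker-32b-v26-preview.Q4_K_M.gguf",
)
subprocess.run(["llama-cli", "-m", gguf_path, "-p", "Hello"], check=True)
```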
**modelId:** zelk12/MT4-gemma-3-12B · **author:** zelk12
**last_modified:** 2025-05-02T19:06:06Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0", "base_model:merge:ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0", "base_model:huihui-ai/gemma-3-12b-it-abliterated", "base_model:merge:huihui-ai/gemma-3-12b-it-abliterated", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** image-text-to-text · **createdAt:** 2025-05-02T18:55:46Z
---
base_model:
- ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0
- huihui-ai/gemma-3-12b-it-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: gemma
pipeline_tag: image-text-to-text
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [huihui-ai/gemma-3-12b-it-abliterated](https://huggingface.co/huihui-ai/gemma-3-12b-it-abliterated) as a base.

### Models Merged

The following models were included in the merge:
* [ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0](https://huggingface.co/ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: huihui-ai/gemma-3-12b-it-abliterated
    # no parameters necessary for base model
  - model: ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: huihui-ai/gemma-3-12b-it-abliterated
parameters:
  normalize: true
dtype: bfloat16
```
**modelId:** Jobz-Hunting-Sajal-Malik-18s/Jobz.Hunting.Sajal.Malik.Viral.Video.Link · **author:** Jobz-Hunting-Sajal-Malik-18s
**last_modified:** 2025-05-02T19:04:35Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T19:02:04Z
[๐ŸŒ CLICK HERE ๐ŸŸข==โ–บโ–บ WATCH NOW](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik) [๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik)
**modelId:** darkc0de/Xortron-SCE-24B-CriminalComputingConfig-Q4_K_S-GGUF · **author:** darkc0de
**last_modified:** 2025-05-02T19:01:34Z · **downloads:** 0 · **likes:** 1
**library_name:** transformers · **tags:**
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:darkc0de/Xortron-SCE-24B-CriminalComputingConfig", "base_model:quantized:darkc0de/Xortron-SCE-24B-CriminalComputingConfig", "endpoints_compatible", "region:us", "conversational" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T19:00:33Z
---
base_model: darkc0de/Xortron-SCE-24B-CriminalComputingConfig
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# darkc0de/Xortron-SCE-24B-CriminalComputingConfig-Q4_K_S-GGUF

This model was converted to GGUF format from [`darkc0de/Xortron-SCE-24B-CriminalComputingConfig`](https://huggingface.co/darkc0de/Xortron-SCE-24B-CriminalComputingConfig) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/darkc0de/Xortron-SCE-24B-CriminalComputingConfig) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo darkc0de/Xortron-SCE-24B-CriminalComputingConfig-Q4_K_S-GGUF --hf-file xortron-sce-24b-criminalcomputingconfig-q4_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo darkc0de/Xortron-SCE-24B-CriminalComputingConfig-Q4_K_S-GGUF --hf-file xortron-sce-24b-criminalcomputingconfig-q4_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo darkc0de/Xortron-SCE-24B-CriminalComputingConfig-Q4_K_S-GGUF --hf-file xortron-sce-24b-criminalcomputingconfig-q4_k_s.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo darkc0de/Xortron-SCE-24B-CriminalComputingConfig-Q4_K_S-GGUF --hf-file xortron-sce-24b-criminalcomputingconfig-q4_k_s.gguf -c 2048
```
**modelId:** Hachipo/Meta-Llama-3-8B-MIFT-ja_1000_2 · **author:** Hachipo
**last_modified:** 2025-05-02T18:54:46Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2025-05-02T18:27:28Z
(Same auto-generated 🤗 transformers template as above, except the front matter lists `tags: trl, sft`; every field still reads "[More Information Needed]".)
**modelId:** Hamzah-Asadullah/GenericRPV3-2B · **author:** Hamzah-Asadullah
**last_modified:** 2025-05-02T18:53:43Z · **downloads:** 0 · **likes:** 1
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "qwen3", "text-generation", "rp", "roleplay", "code", "mathematics", "multilingual", "merge", "mergekit", "uncensored", "text2text-generation", "en", "zh", "hi", "ur", "de", "it", "es", "fr", "pl", "ar", "base_model:XformAI-india/qwen-1.7b-coder", "base_model:merge:XformAI-india/qwen-1.7b-coder", "base_model:huihui-ai/Qwen3-1.7B-abliterated", "base_model:merge:huihui-ai/Qwen3-1.7B-abliterated", "base_model:kxdw2580/Qwen3-1.7B-Catgirl-test0430", "base_model:merge:kxdw2580/Qwen3-1.7B-Catgirl-test0430", "base_model:wzx111/Qwen3-1.7B-MATH-GDPO", "base_model:merge:wzx111/Qwen3-1.7B-MATH-GDPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text2text-generation · **createdAt:** 2025-05-02T18:32:26Z
---
license: apache-2.0
language:
- en
- zh
- hi
- ur
- de
- it
- es
- fr
- pl
- ar
base_model:
- kxdw2580/Qwen3-1.7B-Catgirl-test0430
- huihui-ai/Qwen3-1.7B-abliterated
- XformAI-india/qwen-1.7b-coder
- wzx111/Qwen3-1.7B-MATH-GDPO
tags:
- rp
- roleplay
- code
- mathematics
- multilingual
- merge
- mergekit
- uncensored
pipeline_tag: text2text-generation
library_name: transformers
---

> [!IMPORTANT]
> My ChatGPT-type website [here](https://xetute.github.io/)
> Support me [here (ko-fi)](https://ko-fi.com/hamzahasadullah)

Too lazy to write a detailed model card. This model is part of the GRP / GenericRP series; this is V3, based on Qwen3 2B and licensed accordingly. It's a simple merge. For the intended behaviour, see the V2 card, which is more detailed. Merge weights (a hypothetical config reconstructing them follows this card):

- kxdw2580/Qwen3-1.7B-Catgirl-test0430: w0.25
- huihui-ai/Qwen3-1.7B-abliterated: w0.25
- XformAI-india/qwen-1.7b-coder: w0.25
- wzx111/Qwen3-1.7B-MATH-GDPO: w0.25

Happy chatting or whatever.

<div style="display: flex; flex-direction: column; justify-content: center; align-items: left; font-size: 1rem; padding: 20px;">
  <div style="display: flex; flex-direction: row; align-items: center; margin: 10px; margin-left: 0; padding: 0;">
    <img src="https://xetute.github.io/favicon.ico" style="margin: 0; border-radius: 50%; height: 2rem;"/>
    <h2 style="margin: 0; margin-left: 10px;">XeTute Technologies</h2>
  </div>
  <div style="display: flex; flex-direction: row; gap: 5px; margin: 0; max-width: 500px;">
    XeTute Technologies is an unofficial Pakistani organisation created by <a href="https://huggingface.co/Hamzah-Asadullah">Hamzah Asadullah</a>.
  </div>
  <h2 style="margin: 5px; margin-top: 20px; margin-left: 0;">Links</h2>
  <div style="display: flex; flex-direction: row; word-break: none; gap: 5px;">
    <a href="https://huggingface.co/XeTute">HuggingFace</a>
    <a href="https://github.com/XeTute">GitHub</a>
  </div>
  <div style="display: flex; flex-direction: row; word-break: none; gap: 5px;">
    <a href="https://ko-fi.com/hamzahasadullah">Buy me a Coffee</a>
    <a href="https://xetute.github.io">Apex Webpage</a>
  </div>
  <h2 style="margin: 5px; margin-top: 20px; margin-left: 0;">Pakistan</h2>
  Pakistan is a country in South Asia known for its rich culture despite the British, its stunning landscape, and the PAF (Pakistan Armed Forces), its military. Long live the Islamic Republic of Pakistan.<br>
  <img src="https://upload.wikimedia.org/wikipedia/commons/3/32/Flag_of_Pakistan.svg" style="width: 85%; max-width: 512px; border-radius: 25px;"/>
</div>
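The card only states equal 0.25 weights and calls this "a simple merge", so here is a hypothetical mergekit configuration that would reproduce those weights. The `linear` merge method and `bfloat16` dtype are assumptions, not documented by the author.

```yaml
# Hypothetical reconstruction -- the card only states equal 0.25 weights and
# "a simple merge"; the linear method and dtype below are assumptions.
merge_method: linear
models:
  - model: kxdw2580/Qwen3-1.7B-Catgirl-test0430
    parameters:
      weight: 0.25
  - model: huihui-ai/Qwen3-1.7B-abliterated
    parameters:
      weight: 0.25
  - model: XformAI-india/qwen-1.7b-coder
    parameters:
      weight: 0.25
  - model: wzx111/Qwen3-1.7B-MATH-GDPO
    parameters:
      weight: 0.25
dtype: bfloat16
```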
**modelId:** Hamzah-Asadullah/GenericRPV3-2B-GGUF · **author:** Hamzah-Asadullah
**last_modified:** 2025-05-02T18:53:14Z · **downloads:** 0 · **likes:** 1
**library_name:** null · **tags:**
[ "gguf", "rp", "roleplay", "code", "mathematics", "multilingual", "text2text-generation", "en", "zh", "hi", "ur", "de", "it", "es", "fr", "pl", "ar", "base_model:Hamzah-Asadullah/GenericRPV3-2B", "base_model:quantized:Hamzah-Asadullah/GenericRPV3-2B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text2text-generation · **createdAt:** 2025-05-02T18:41:05Z
(Front matter: `license: apache-2.0`, the same ten languages, `base_model: Hamzah-Asadullah/GenericRPV3-2B`, tags rp/roleplay/code/mathematics/multilingual, `pipeline_tag: text2text-generation`. The card body is otherwise identical to the GenericRPV3-2B card above, down to the merge-weight list and the XeTute footer.)
**modelId:** JayJayisreal/QwQ-32B-ArliAI-RpR-v3 · **author:** JayJayisreal
**last_modified:** 2025-05-02T18:51:47Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "license:apache-2.0", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T18:51:47Z
---
license: apache-2.0
---
**modelId:** dinalad0/my-fino1-model · **author:** dinalad0
**last_modified:** 2025-05-02T18:51:43Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:TheFinAI/Fino1_Reasoning_Path_FinQA_v2", "arxiv:2502.08127", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2025-05-02T18:41:41Z
---
license: apache-2.0
datasets:
- TheFinAI/Fino1_Reasoning_Path_FinQA_v2
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
---

# 🦙 Fino1-14B

**Fino1-14B** is a fine-tuned version of **Qwen2.5-14B-Instruct**, designed to improve performance on **financial reasoning tasks**. This model has been trained using **SFT** and **RF** on **TheFinAI/Fino1_Reasoning_Path_FinQA_v2**, enhancing its capabilities in financial reasoning. Check our paper arxiv.org/abs/2502.08127 for more details.

## 📌 Model Details

- **Model Name:** `Fino1-14B`
- **Base Model:** `Qwen2.5-14B-Instruct`
- **Fine-Tuned On:** `TheFinAI/Fino1_Reasoning_Path_FinQA_v2`, derived from multiple financial datasets.
- **Training Method:** SFT and RF
- **Objective:** `[Enhance performance on specific tasks such as financial mathematical reasoning]`
- **Tokenizer:** Inherited from `Qwen/Qwen2.5-14B-Instruct`

## 📊 Training Configuration

- **Training Hardware:** `GPU: [e.g., 4xH100]`
- **Batch Size:** `[e.g., 16]`
- **Learning Rate:** `[e.g., 2e-5]`
- **Epochs:** `[e.g., 3]`
- **Optimizer:** `[e.g., AdamW, LAMB]`

## 🔧 Usage

To use `Fino1-14B` with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheFinAI/Fino1-14B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## 💡 Citation

If you use this model in your research, please cite:

```bibtex
@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}
```
**modelId:** sema-aviation/balloon-detection · **author:** sema-aviation
**last_modified:** 2025-05-02T18:48:29Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "object-detection", "tr", "dataset:sema-aviation/balloon-detection", "arxiv:1910.09700", "base_model:Ultralytics/YOLO11", "base_model:finetune:Ultralytics/YOLO11", "license:mit", "region:us" ]
**pipeline_tag:** object-detection · **createdAt:** 2025-05-01T20:23:00Z
(The card is again the auto-generated template reproduced in full earlier, here with front matter `license: mit`, `datasets: sema-aviation/balloon-detection`, `language: tr`, `base_model: Ultralytics/YOLO11`, `pipeline_tag: object-detection`, plus the stock note "This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1)."; every other field reads "[More Information Needed]". A speculative usage sketch follows.)
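Since the card's usage sections are empty, here is a speculative inference sketch. It assumes a standard Ultralytics YOLO11 checkpoint (per the `base_model` tag) exported as a `.pt` weights file; the file name `best.pt` and the sample image path are placeholders, not documented artifacts of this repo.

```python
# Speculative sketch: load fine-tuned YOLO11 weights and run balloon detection.
# "best.pt" and the image path are placeholders -- the repo does not document
# its actual file layout.
from ultralytics import YOLO

model = YOLO("best.pt")  # path to the downloaded fine-tuned weights
results = model.predict("balloons.jpg", conf=0.25)

for result in results:
    for box in result.boxes:
        print(box.cls, box.conf, box.xyxy)  # class id, confidence, box corners
```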
**modelId:** lisabdunlap/Llama-3.2-3B-Instruct-r32-e10-lr0.0002-new-new · **author:** lisabdunlap
**last_modified:** 2025-05-02T18:47:19Z · **downloads:** 0 · **likes:** 0
**library_name:** transformers · **tags:**
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2025-05-02T18:46:22Z
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
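The card documents training provenance but not inference, so here is an assumed chat-style generation sketch using standard `transformers` causal-LM APIs. The prompt is a placeholder, and the card does not say what the fine-tune was for.

```python
# Assumed usage sketch -- the card documents training provenance but not
# inference. Standard causal-LM chat APIs only; the prompt is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lisabdunlap/Llama-3.2-3B-Instruct-r32-e10-lr0.0002-new-new"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```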
**modelId:** AGofficial/AgX-2 · **author:** AGofficial
**last_modified:** 2025-05-02T18:46:33Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "en", "base_model:AGofficial/AgX-1", "base_model:finetune:AGofficial/AgX-1", "license:mit", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T18:43:58Z
---
license: mit
language:
- en
base_model:
- AGofficial/AgX-1
---

# AgX-2

AgX-2 is a next-generation AI interface powered by experimental architecture beyond transformers. AgX-2 processes data using recursive structures, neuron signals, and echo-state memory to deliver dynamic, human-like responses.

## Features

- 🚀 Turbocharged inference via `AgGPT-8-TURBO-v2`.
- ✍️ Built-in grammar correction.
- 🧠 Modular design.
- 🎯 Designed for high-quality, fluid conversations and smart contextual awareness.

This model paves the way for AgGPT-11, AgX-3, and beyond.
**modelId:** Triangle104/Violet_Magcap-12B-Q8_0-GGUF · **author:** Triangle104
**last_modified:** 2025-05-02T18:45:02Z · **downloads:** 0 · **likes:** 0
**library_name:** null · **tags:**
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:Nitral-AI/Violet_Magcap-12B", "base_model:quantized:Nitral-AI/Violet_Magcap-12B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
**pipeline_tag:** null · **createdAt:** 2025-05-02T18:43:24Z
---
base_model: Nitral-AI/Violet_Magcap-12B
language:
- en
license: other
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Violet_Magcap-12B-Q8_0-GGUF

This model was converted to GGUF format from [`Nitral-AI/Violet_Magcap-12B`](https://huggingface.co/Nitral-AI/Violet_Magcap-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Nitral-AI/Violet_Magcap-12B) for more details on the model.

---

Mag-Mell-12B-R1, jacked up on SFT reasoning data like it was pre-workout for logic bros. Then for chaos, slapped together with Captain_Eris_Violet-GRPO like some twisted AI Voltron. Double-tapped the merge with SFT on fresh reasoning data. Now it's solving problems like Bill Nye on a meme bender and hoarding cursed philosophy sh*tposts.

---

## Use with llama.cpp

(The card then repeats, verbatim, the same brew / CLI / server / clone-and-build instructions as the darkc0de GGUF card above, with `--hf-repo Triangle104/Violet_Magcap-12B-Q8_0-GGUF` and `--hf-file violet_magcap-12b-q8_0.gguf`.)
Triangle104/Violet_Magcap-12B-Q6_K-GGUF
Triangle104
2025-05-02T18:42:36Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:Nitral-AI/Violet_Magcap-12B", "base_model:quantized:Nitral-AI/Violet_Magcap-12B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T18:39:42Z
--- base_model: Nitral-AI/Violet_Magcap-12B language: - en license: other tags: - llama-cpp - gguf-my-repo --- # Triangle104/Violet_Magcap-12B-Q6_K-GGUF This model was converted to GGUF format from [`Nitral-AI/Violet_Magcap-12B`](https://huggingface.co/Nitral-AI/Violet_Magcap-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Nitral-AI/Violet_Magcap-12B) for more details on the model. --- Mag-Mell-12B-R1, jacked up on SFT reasoning data like it was pre-workout for logic bros. Then for chaos, slapped together with Captain_Eris_Violet-GRPO like some twisted AI Voltron. Double-tapped the merge with SFT on fresh reasoning data. Now it's solving problems like Bill Nye on a meme bender and hoarding cursed philosophy sh*tposts. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Violet_Magcap-12B-Q6_K-GGUF --hf-file violet_magcap-12b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Violet_Magcap-12B-Q6_K-GGUF --hf-file violet_magcap-12b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Violet_Magcap-12B-Q6_K-GGUF --hf-file violet_magcap-12b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Violet_Magcap-12B-Q6_K-GGUF --hf-file violet_magcap-12b-q6_k.gguf -c 2048 ```
marialvsantiago/72cd73b7-f99e-4cf7-a91b-e8e20d05d76b
marialvsantiago
2025-05-02T18:39:08Z
0
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-02T18:36:52Z
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: 72cd73b7-f99e-4cf7-a91b-e8e20d05d76b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-350m bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 474129246f3c557f_train_data.json ds_type: json format: custom path: /workspace/input_data/474129246f3c557f_train_data.json type: field_input: artist field_instruction: title field_output: lyrics format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: marialvsantiago/72cd73b7-f99e-4cf7-a91b-e8e20d05d76b hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/474129246f3c557f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 32c32359-3060-4ed4-990d-db4922ae7969 wandb_project: s56-33 wandb_run: your_name wandb_runid: 32c32359-3060-4ed4-990d-db4922ae7969 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 72cd73b7-f99e-4cf7-a91b-e8e20d05d76b This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2784 | 0.0464 | 200 | 2.9638 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
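A minimal sketch of loading the adapter above for inference, assuming the standard PEFT and transformers APIs and the base/adapter repo IDs from the card (the prompt text is a placeholder):

```python
# Minimal sketch: base model + LoRA adapter via PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# Adapter repo ID taken from the card above.
model = PeftModel.from_pretrained(base, "marialvsantiago/72cd73b7-f99e-4cf7-a91b-e8e20d05d76b")

inputs = tokenizer("Write a song title:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Since the adapter was trained with `load_in_4bit: true`, loading the base through bitsandbytes quantization would mirror the training setup more closely.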
infogeo/7446cf29-f7eb-4423-9a9d-ab569004813e
infogeo
2025-05-02T18:38:48Z
0
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-02T18:36:46Z
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: 7446cf29-f7eb-4423-9a9d-ab569004813e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: facebook/opt-350m bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 474129246f3c557f_train_data.json ds_type: json format: custom path: /workspace/input_data/474129246f3c557f_train_data.json type: field_input: artist field_instruction: title field_output: lyrics format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/7446cf29-f7eb-4423-9a9d-ab569004813e hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/474129246f3c557f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 32c32359-3060-4ed4-990d-db4922ae7969 wandb_project: s56-28 wandb_run: your_name wandb_runid: 32c32359-3060-4ed4-990d-db4922ae7969 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 7446cf29-f7eb-4423-9a9d-ab569004813e This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.0961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.6603 | 0.0348 | 150 | 3.0961 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
rakib5bit/Rakib
rakib5bit
2025-05-02T18:38:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-02T18:38:46Z
--- license: apache-2.0 ---
mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF
mradermacher
2025-05-02T18:38:26Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.13", "base_model:quantized:Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.13", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T18:07:06Z
--- base_model: Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.13 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Neelectric/OLMo-2-1124-7B-Instruct_GRPOv01.13 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_GRPOv01.13.f16.gguf) | f16 | 14.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
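For anyone unsure where to start with the quants listed above, a hedged sketch using huggingface_hub and llama-cpp-python (the filename matches the Q4_K_M row in the table; any GGUF-capable runtime works):

```python
# Sketch: fetch one quant and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/OLMo-2-1124-7B-Instruct_GRPOv01.13-GGUF",
    filename="OLMo-2-1124-7B-Instruct_GRPOv01.13.Q4_K_M.gguf",  # Q4_K_M row above
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("The meaning of life is", max_tokens=32)["choices"][0]["text"])
```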
ivangrapher/0ba7aef3-8dcb-4be4-8fd8-e1405d30448d
ivangrapher
2025-05-02T18:38:22Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:openlm-research/open_llama_3b", "base_model:adapter:openlm-research/open_llama_3b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-02T17:44:42Z
--- library_name: peft license: apache-2.0 base_model: openlm-research/open_llama_3b tags: - axolotl - generated_from_trainer model-index: - name: 0ba7aef3-8dcb-4be4-8fd8-e1405d30448d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: openlm-research/open_llama_3b bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 83b1700bf8a9ee56_train_data.json ds_type: json format: custom path: /workspace/input_data/83b1700bf8a9ee56_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: ivangrapher/0ba7aef3-8dcb-4be4-8fd8-e1405d30448d hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/83b1700bf8a9ee56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1915c1a2-c038-45d6-98b0-3f4a5eeb7f31 wandb_project: s56-7 wandb_run: your_name wandb_runid: 1915c1a2-c038-45d6-98b0-3f4a5eeb7f31 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 0ba7aef3-8dcb-4be4-8fd8-e1405d30448d This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3022 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3458 | 0.0036 | 150 | 2.3022 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nicolaadrah/physics_adapted_llama_3.2_3b
nicolaadrah
2025-05-02T18:32:31Z
0
0
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-02T18:21:14Z
--- base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** nicolaadrah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
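A sketch of loading this checkpoint with Unsloth's fast path, assuming the `unsloth` package is installed and the repo ID above hosts the weights (the prompt is a placeholder):

```python
# Sketch: load the checkpoint with Unsloth's fast inference path.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="nicolaadrah/physics_adapted_llama_3.2_3b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's optimized generation

inputs = tokenizer("State Newton's second law.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```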
akos2/swap
akos2
2025-05-02T18:31:49Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-05-02T18:30:38Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/hugging.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: apache-2.0 --- # migrationlora <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/akos2/swap/tree/main) them in the Files & versions tab.
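A sketch of attaching this LoRA to the FLUX.1-dev base with diffusers (assumes access to the gated base model, a GPU with enough memory, and a placeholder prompt):

```python
# Sketch: attach this LoRA to the FLUX.1-dev base pipeline.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("akos2/swap")  # repo ID from the card above
pipe.to("cuda")

image = pipe("a portrait photo", num_inference_steps=28).images[0]
image.save("out.png")
```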
lusxvr/nanoVLM-256M
lusxvr
2025-05-02T18:29:55Z
0
0
null
[ "vision-language", "multimodal", "pytorch", "small-model", "efficient", "research", "VLM", "image-text-to-text", "dataset:HuggingFaceM4/the_cauldron", "license:apache-2.0", "region:us" ]
image-text-to-text
2025-05-02T16:24:03Z
--- license: apache-2.0 tags: - vision-language - multimodal - pytorch - small-model - efficient - research - VLM model_name: nanoVLM datasets: - HuggingFaceM4/the_cauldron metrics: - accuracy pipeline_tag: image-text-to-text --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built in pure PyTorch, the entire model architecture and training logic fit within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-512-86M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 256M-parameter model. The model achieves ~x% accuracy on MMStar after training for 6 hours on a single H100 GPU using 1.7M samples from [the cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) dataset, making it a strong baseline for low-resource VLM research. The model is ideal for researchers and developers interested in exploring VLM training with minimal computational overhead, and serves as a perfect starting point for tinkering with multimodal architectures. **Model Architecture:** - Vision Transformer (SigLIP-B/16) - Causal Language Model (SmolLM2) - Modality Projection Layer **Training:** - Trained on ~1.7M samples from the `the_cauldron` dataset - 6 hours on a single NVIDIA H100 GPU - Resulting model size: 256M parameters **Evaluation:** - MMStar Accuracy: ~x% **Usage:** Usable through the nanoVLM repository (which provides the `VLM` class and `cfg` module used below): https://github.com/huggingface/nanoVLM ```python from huggingface_hub import hf_hub_download path_to_hf_file = hf_hub_download(repo_id="lusxvr/nanoVLM-256M", filename="nanoVLM-256M.pth") model = VLM(cfg.VLMConfig()) model.load_checkpoint(path_to_hf_file) ```
scales-okn/spacy_judge_model
scales-okn
2025-05-02T18:29:14Z
14
0
spacy
[ "spacy", "token-classification", "en", "license:gpl-3.0", "model-index", "region:us" ]
token-classification
2025-04-14T20:51:37Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_pipeline results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9887751161 - name: NER Recall type: recall value: 0.9952089692 - name: NER F Score type: f_score value: 0.9919816105 license: gpl-3.0 --- | Feature | Description | | --- | --- | | **Name** | `en_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.7.6,<3.8.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 684830 keys, 684830 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | gpl-3.0 | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `HONORARIUM`, `JUDGE` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 99.20 | | `ENTS_P` | 98.88 | | `ENTS_R` | 99.52 | | `TOK2VEC_LOSS` | 69445.26 | | `NER_LOSS` | 18046.49 |
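A sketch of running the pipeline above to extract its two entity types, assuming the packaged pipeline from this repo is installed (or that a local download path is passed to `spacy.load`; the sentence is a placeholder):

```python
# Sketch: run the NER pipeline and print JUDGE / HONORARIUM entities.
import spacy

nlp = spacy.load("en_pipeline")  # or spacy.load("/path/to/downloaded/pipeline")
doc = nlp("Signed by the Honorable Jane Doe, United States District Judge.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```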
chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold4
chchen
2025-05-02T18:25:11Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:klyang/MentaLLaMA-chat-7B-hf", "base_model:adapter:klyang/MentaLLaMA-chat-7B-hf", "license:mit", "region:us" ]
null
2025-05-02T16:43:00Z
--- library_name: peft license: mit base_model: klyang/MentaLLaMA-chat-7B-hf tags: - llama-factory - lora - generated_from_trainer model-index: - name: MentaLLaMA-chat-7B-PsyCourse-doc-info-fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MentaLLaMA-chat-7B-PsyCourse-doc-info-fold4 This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-doc-info-train-fold4 dataset. It achieves the following results on the evaluation set: - Loss: 0.0882 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3967 | 0.3951 | 10 | 0.4004 | | 0.4552 | 0.7901 | 20 | 0.2511 | | 0.1757 | 1.1852 | 30 | 0.1820 | | 0.1409 | 1.5802 | 40 | 0.1474 | | 0.1122 | 1.9753 | 50 | 0.1285 | | 0.2986 | 2.3704 | 60 | 0.1134 | | 0.0918 | 2.7654 | 70 | 0.1039 | | 0.0807 | 3.1605 | 80 | 0.0966 | | 0.0862 | 3.5556 | 90 | 0.0924 | | 0.085 | 3.9506 | 100 | 0.0891 | | 0.101 | 4.3457 | 110 | 0.0883 | | 0.0736 | 4.7407 | 120 | 0.0882 | ### Framework versions - PEFT 0.12.0 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
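Beyond plain adapter loading, a sketch of merging this LoRA into its base for standalone deployment, assuming the repo IDs from the card above and PEFT's `merge_and_unload` (the output directory name is a placeholder):

```python
# Sketch: merge the LoRA into the base weights for standalone use.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")
model = PeftModel.from_pretrained(base, "chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold4")

merged = model.merge_and_unload()  # folds LoRA deltas into the base weights
merged.save_pretrained("mentallama-psycourse-merged")
```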
BioMike/emoji_vae
BioMike
2025-05-02T18:24:20Z
0
0
null
[ "safetensors", "vae", "license:apache-2.0", "region:us" ]
null
2025-05-02T15:54:11Z
--- license: apache-2.0 ---
TakalaWang/Discussion-Phi-4-multimodal-instruct-w-asr
TakalaWang
2025-05-02T18:22:47Z
22
0
transformers
[ "transformers", "tensorboard", "safetensors", "phi4mm", "text-generation", "generated_from_trainer", "conversational", "custom_code", "base_model:microsoft/Phi-4-multimodal-instruct", "base_model:finetune:microsoft/Phi-4-multimodal-instruct", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2025-04-25T03:34:54Z
--- library_name: transformers license: mit base_model: microsoft/Phi-4-multimodal-instruct tags: - generated_from_trainer model-index: - name: Discussion-Phi-4-multimodal-instruct-w-asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Discussion-Phi-4-multimodal-instruct-w-asr This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 14.0991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0041 | 0.2235 | 10 | 14.2081 | | 0.3808 | 0.4469 | 20 | 13.9663 | | 0.2584 | 0.6704 | 30 | 14.0761 | | 0.2698 | 0.8939 | 40 | 14.0864 | | 0.2399 | 1.1117 | 50 | 14.0333 | | 0.2446 | 1.3352 | 60 | 14.0288 | | 0.2098 | 1.5587 | 70 | 13.9403 | | 0.2302 | 1.7821 | 80 | 13.9767 | | 0.1214 | 2.0 | 90 | 13.9759 | | 0.2095 | 2.2235 | 100 | 13.9181 | | 0.128 | 2.4469 | 110 | 13.9729 | | 0.1565 | 2.6704 | 120 | 13.9650 | | 0.1445 | 2.8939 | 130 | 14.0991 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
WatsonOverHere/full_catholic_combined_bf16
WatsonOverHere
2025-05-02T18:18:47Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:WatsonOverHere/mysterious_mistral-small-3.1-24b", "base_model:adapter:WatsonOverHere/mysterious_mistral-small-3.1-24b", "region:us" ]
null
2025-05-02T00:09:59Z
--- base_model: WatsonOverHere/mysterious_mistral-small-3.1-24b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
zelk12/MT3-gemma-3-12B
zelk12
2025-05-02T18:15:52Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:huihui-ai/gemma-3-12b-it-abliterated", "base_model:merge:huihui-ai/gemma-3-12b-it-abliterated", "base_model:soob3123/amoral-gemma3-12B-v2-qat", "base_model:merge:soob3123/amoral-gemma3-12B-v2-qat", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-02T16:46:22Z
--- base_model: - soob3123/amoral-gemma3-12B-v2-qat - huihui-ai/gemma-3-12b-it-abliterated library_name: transformers tags: - mergekit - merge license: gemma pipeline_tag: image-text-to-text --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [soob3123/amoral-gemma3-12B-v2-qat](https://huggingface.co/soob3123/amoral-gemma3-12B-v2-qat) as a base. ### Models Merged The following models were included in the merge: * [huihui-ai/gemma-3-12b-it-abliterated](https://huggingface.co/huihui-ai/gemma-3-12b-it-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: soob3123/amoral-gemma3-12B-v2-qat #no parameters necessary for base model - model: huihui-ai/gemma-3-12b-it-abliterated parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: soob3123/amoral-gemma3-12B-v2-qat parameters: normalize: true dtype: bfloat16 ```
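A hedged sketch of reproducing a merge like this with mergekit's Python entry points, following the pattern in mergekit's README (`merge_config.yaml` is assumed to contain the YAML from the card above, and the output path is a placeholder):

```python
# Sketch: reproduce the DARE TIES merge above with mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_config.yaml") as f:  # assumed to hold the YAML from this card
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(config, "./MT3-gemma-3-12B", options=MergeOptions(cuda=True, copy_tokenizer=True))
```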
mothnaZl/s1-Qwen-Qwen2.5-7B-6-32768
mothnaZl
2025-05-02T18:14:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:43:03Z
--- base_model: Qwen/Qwen2.5-7B library_name: transformers model_name: s1-Qwen-Qwen2.5-7B-6-32768 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for s1-Qwen-Qwen2.5-7B-6-32768 This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mothnaZl/s1-Qwen-Qwen2.5-7B-6-32768", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mothnazhong-hong-kong-university-of-science-and-technology/s1/runs/vhag3irs) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.1 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bruhzair/ignore-merge-4
bruhzair
2025-05-02T18:09:59Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T17:39:31Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # way2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the Passthrough merge method. ### Models Merged The following models were included in the merge: * /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough modules: default: slices: - sources: - layer_range: [0, 4] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [2, 4] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [4, 8] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [6, 8] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [8, 12] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [10, 12] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [12, 16] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [14, 16] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [16, 20] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [18, 20] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [20, 24] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [22, 24] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [24, 28] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [26, 28] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: 
[28, 32] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [30, 32] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [32, 36] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [34, 36] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [36, 40] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [38, 40] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [40, 44] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [42, 44] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [44, 48] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [46, 48] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [48, 52] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [50, 52] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [52, 56] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [54, 56] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [56, 60] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [58, 60] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [60, 64] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [62, 64] model: 
/workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [64, 68] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [66, 68] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [68, 72] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [70, 72] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [72, 76] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [74, 76] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [76, 80] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - sources: - layer_range: [78, 80] model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 ```
Humphery7/yoruba-english-multilingual-extended-1
Humphery7
2025-05-02T18:09:20Z
18
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-12T04:15:10Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer model-index: - name: yoruba-english-multilingual-extended-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # yoruba-english-multilingual-extended-1 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0280 - eval_wer: 0.1190 - eval_runtime: 35.6397 - eval_samples_per_second: 2.806 - eval_steps_per_second: 0.365 - epoch: 4.9523 - step: 13500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 80 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
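A sketch of transcribing a clip with this fine-tuned checkpoint via the transformers ASR pipeline (`sample.wav` is a placeholder path):

```python
# Sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Humphery7/yoruba-english-multilingual-extended-1",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```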
chchen/Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold4
chchen
2025-05-02T18:06:01Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:aaditya/Llama3-OpenBioLLM-8B", "base_model:adapter:aaditya/Llama3-OpenBioLLM-8B", "license:llama3", "region:us" ]
null
2025-05-02T16:24:36Z
--- library_name: peft license: llama3 base_model: aaditya/Llama3-OpenBioLLM-8B tags: - llama-factory - lora - generated_from_trainer model-index: - name: Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold4 This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-doc-info-train-fold4 dataset. It achieves the following results on the evaluation set: - Loss: 0.0580 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3021 | 0.3951 | 10 | 0.2468 | | 0.1477 | 0.7901 | 20 | 0.1373 | | 0.1027 | 1.1852 | 30 | 0.1015 | | 0.0804 | 1.5802 | 40 | 0.0804 | | 0.0652 | 1.9753 | 50 | 0.0702 | | 0.0535 | 2.3704 | 60 | 0.0641 | | 0.0527 | 2.7654 | 70 | 0.0617 | | 0.0527 | 3.1605 | 80 | 0.0603 | | 0.0495 | 3.5556 | 90 | 0.0601 | | 0.0489 | 3.9506 | 100 | 0.0582 | | 0.0451 | 4.3457 | 110 | 0.0585 | | 0.0406 | 4.7407 | 120 | 0.0580 | ### Framework versions - PEFT 0.12.0 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
Triangle104/huihui-ai_Qwen3-4B-abliterated-Q8_0-GGUF
Triangle104
2025-05-02T18:04:26Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-4B-abliterated", "base_model:quantized:huihui-ai/Qwen3-4B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T18:04:06Z
--- base_model: huihui-ai/Qwen3-4B-abliterated library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Qwen3-4B-abliterated-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-4B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q8_0-GGUF --hf-file qwen3-4b-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q8_0-GGUF --hf-file qwen3-4b-abliterated-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q8_0-GGUF --hf-file qwen3-4b-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q8_0-GGUF --hf-file qwen3-4b-abliterated-q8_0.gguf -c 2048 ```
cybershiptrooper/grpo_linear_mean_1p_fpr_7B-threshold_0.252-RM-n_examples_200-probe_linear_layers_10
cybershiptrooper
2025-05-02T18:04:25Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:saraprice/llama2-7B-chat-helpful-only", "base_model:finetune:saraprice/llama2-7B-chat-helpful-only", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T15:41:11Z
--- base_model: saraprice/llama2-7B-chat-helpful-only library_name: transformers model_name: grpo_linear_mean_1p_fpr_7B-threshold_0.252-RM-n_examples_200-probe_linear_layers_10 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for grpo_linear_mean_1p_fpr_7B-threshold_0.252-RM-n_examples_200-probe_linear_layers_10 This model is a fine-tuned version of [saraprice/llama2-7B-chat-helpful-only](https://huggingface.co/saraprice/llama2-7B-chat-helpful-only). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cybershiptrooper/grpo_linear_mean_1p_fpr_7B-threshold_0.252-RM-n_examples_200-probe_linear_layers_10", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cybershiptrooper/huggingface/runs/qiumhlal) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.14.0 - Transformers: 4.51.3 - Pytorch: 2.2.2+cu121 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
filipesantoscv11/e53fc757-183e-48bb-af3b-d0ba28402a1e
filipesantoscv11
2025-05-02T18:03:50Z
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-02T17:50:58Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-70m tags: - axolotl - generated_from_trainer model-index: - name: e53fc757-183e-48bb-af3b-d0ba28402a1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-70m bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - e2d471edf16c56fc_train_data.json ds_type: json format: custom path: /workspace/input_data/e2d471edf16c56fc_train_data.json type: field_instruction: en field_output: fr format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: filipesantoscv11/e53fc757-183e-48bb-af3b-d0ba28402a1e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/e2d471edf16c56fc_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: bf14246a-8435-448e-a3e4-a15f5df4da79 wandb_project: s56-6 wandb_run: your_name wandb_runid: bf14246a-8435-448e-a3e4-a15f5df4da79 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # e53fc757-183e-48bb-af3b-d0ba28402a1e This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 5.6971 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.7403 | 0.0017 | 200 | 5.6971 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
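A minimal usage sketch (not part of the original card) for loading this LoRA adapter with PEFT; it assumes the adapter repo resolves via `AutoPeftModelForCausalLM` and that the `EleutherAI/pythia-70m` tokenizer from the `base_model` field above is the right one.

```python
# Hedged sketch: load the LoRA adapter on top of EleutherAI/pythia-70m.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "filipesantoscv11/e53fc757-183e-48bb-af3b-d0ba28402a1e"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

# The axolotl config maps field_instruction: en -> field_output: fr,
# i.e. the adapter was trained on English->French pairs.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```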
deswaq/iuh4
deswaq
2025-05-02T18:02:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T17:58:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Triangle104/huihui-ai_Qwen3-4B-abliterated-Q5_K_S-GGUF
Triangle104
2025-05-02T18:00:02Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-4B-abliterated", "base_model:quantized:huihui-ai/Qwen3-4B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T17:59:45Z
--- base_model: huihui-ai/Qwen3-4B-abliterated library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-4B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -c 2048 ```
Mod78/Text
Mod78
2025-05-02T17:58:22Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T17:58:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rubix9/Llama-3.2-1B-robincnp
rubix9
2025-05-02T17:56:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T17:54:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johngreendr1/b289581c-1555-4115-93c6-f2694de9e55e
johngreendr1
2025-05-02T17:54:59Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B", "base_model:adapter:UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B", "region:us" ]
null
2025-05-02T17:54:46Z
--- base_model: UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
seungbo7747/summarization_model
seungbo7747
2025-05-02T17:53:42Z
12
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:paust/pko-t5-base", "base_model:finetune:paust/pko-t5-base", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-29T02:30:23Z
--- library_name: transformers license: cc-by-4.0 base_model: paust/pko-t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: summarization_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # summarization_model This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6012 - Rouge1: 0.0661 - Rouge2: 0.0169 - Rougel: 0.0660 - Rougelsum: 0.0660 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
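A hedged inference sketch (not in the original card): running the fine-tuned checkpoint through the `summarization` pipeline. The input string is a placeholder; as a pko-t5 derivative, the model is assumed to expect Korean source text.

```python
# Hedged sketch: summarize text with the fine-tuned pko-t5-base checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="seungbo7747/summarization_model")

text = "여기에 요약할 긴 문서를 넣으세요."  # placeholder Korean source document
print(summarizer(text, max_length=128, min_length=16)[0]["summary_text"])
```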
iboero16/SFT-300
iboero16
2025-05-02T17:51:59Z
23
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "region:us" ]
null
2025-05-01T17:39:36Z
--- base_model: huggyllama/llama-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
iboero16/SAFE-SFT-300
iboero16
2025-05-02T17:51:30Z
24
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "region:us" ]
null
2025-05-01T17:39:23Z
--- base_model: huggyllama/llama-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
iboero16/SFT-2000
iboero16
2025-05-02T17:50:46Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "region:us" ]
null
2025-05-02T17:44:09Z
--- base_model: huggyllama/llama-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
niklasm222/qwen2.5-3b-inst-grpo-1.75k-gsm8k-unsloth-willccbb
niklasm222
2025-05-02T17:50:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "grpo", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T17:48:18Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - grpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** niklasm222 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
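A hedged quick-start sketch (not included in the upload note), mirroring the chat-pipeline pattern used elsewhere in these cards; the math prompt is illustrative, chosen because the repo name indicates GRPO training on GSM8K.

```python
# Hedged sketch: chat-style generation with the uploaded GRPO checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="niklasm222/qwen2.5-3b-inst-grpo-1.75k-gsm8k-unsloth-willccbb",
)
messages = [{"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```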
samoline/9ac85235-dbe3-403d-879f-82ee59926727
samoline
2025-05-02T17:49:06Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Maykeye/TinyLLama-v0", "base_model:adapter:Maykeye/TinyLLama-v0", "license:apache-2.0", "region:us" ]
null
2025-05-02T17:48:55Z
--- library_name: peft license: apache-2.0 base_model: Maykeye/TinyLLama-v0 tags: - axolotl - generated_from_trainer model-index: - name: 9ac85235-dbe3-403d-879f-82ee59926727 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: Maykeye/TinyLLama-v0 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - b26558f19627f59f_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: false group_by_length: false hub_model_id: samoline/9ac85235-dbe3-403d-879f-82ee59926727 hub_repo: samoline hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 4 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 4 lora_target_linear: true lr_scheduler: cosine max_steps: 2 micro_batch_size: 1 mlflow_experiment_name: /tmp/b26558f19627f59f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: samoline-nan wandb_mode: online wandb_name: b9f28cad-479c-48df-9a7f-6debe898dedd wandb_project: Gradients-On-Demand wandb_run: dev wandb_runid: b9f28cad-479c-48df-9a7f-6debe898dedd warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9ac85235-dbe3-403d-879f-82ee59926727 This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 6.3406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 7.4119 | 0.1111 | 1 | 6.3375 | | 5.6422 | 0.2222 | 2 | 6.3406 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
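A hedged sketch (not part of the card) showing one way to merge this LoRA adapter into the TinyLLama base weights for adapter-free inference; `merge_and_unload` is standard PEFT API, and the output path is illustrative.

```python
# Hedged sketch: merge the LoRA adapter into the base model and save it.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "samoline/9ac85235-dbe3-403d-879f-82ee59926727"
)
merged = model.merge_and_unload()  # returns a plain transformers model
merged.save_pretrained("tinyllama-v0-merged")  # illustrative output path
```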
CompassioninMachineLearning/10k_four_fifths_animals_PLORA_newest
CompassioninMachineLearning
2025-05-02T17:47:18Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-02T03:59:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sergioalves/fd315d47-ac3e-4bc8-bdbd-f0152c0e1691
sergioalves
2025-05-02T17:47:13Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-02T17:27:13Z
--- library_name: peft license: other base_model: Qwen/Qwen1.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: fd315d47-ac3e-4bc8-bdbd-f0152c0e1691 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: true adapter: lora base_model: Qwen/Qwen1.5-0.5B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 519dc324fa90419b_train_data.json ds_type: json format: custom path: /workspace/input_data/519dc324fa90419b_train_data.json type: field_input: raw_texts field_instruction: gen_questions field_output: Positive format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: sergioalves/fd315d47-ac3e-4bc8-bdbd-f0152c0e1691 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/519dc324fa90419b_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 34c11394-037e-4743-b560-708619a820f6 wandb_project: s56-8 wandb_run: your_name wandb_runid: 34c11394-037e-4743-b560-708619a820f6 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # fd315d47-ac3e-4bc8-bdbd-f0152c0e1691 This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0338 | 0.0104 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/VamMed1.5-4B-GGUF
mradermacher
2025-05-02T17:44:43Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3", "en", "base_model:vamcrizer/VamMed1.5-4B", "base_model:quantized:vamcrizer/VamMed1.5-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T17:17:57Z
--- base_model: vamcrizer/VamMed1.5-4B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - gemma3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/vamcrizer/VamMed1.5-4B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q6_K.gguf) | Q6_K | 3.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.f16.gguf) | f16 | 7.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
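A hedged sketch (not from the card) for fetching one quant locally with `huggingface_hub`; the file name is taken from the quant table above, and the downloaded path can then be handed to your llama.cpp build.

```python
# Hedged sketch: download the recommended Q4_K_M quant from this repo.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/VamMed1.5-4B-GGUF",
    filename="VamMed1.5-4B.Q4_K_M.gguf",  # file name from the quant table
)
print(gguf_path)  # pass this path to llama-cli / llama-server
```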
SemanticAlignment/Mistral-v0.1-Italian-FVT
SemanticAlignment
2025-05-02T17:41:55Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "it", "en", "arxiv:2504.17025", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-10T11:45:53Z
--- language: - it - en license: apache-2.0 pipeline_tag: text-generation library_name: transformers base_model: - mistralai/Mistral-7B-v0.1 --- # Mistral-7B-v0.1-Italian-FVT <div align="center"> <img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" /> </div> The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a family of adapted generative models at the 7B scale (text in/text out), derived from **Mistral-7B-v0.1**. *Mistral-v0.1-Italian-FVT* is a continually trained Mistral model after tokenizer substitution. The tokenizer of this model after adaptation is the same as [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0). **Model developers:** SapienzaNLP, ISTI-CNR, ILC-CNR **Model Architecture:** Mistral-7B-v0.1-Adapted is an auto-regressive language model that uses an optimized transformer architecture. ## Data used for the adaptation The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX). The data are skewed toward Italian, with English making up roughly one quarter: the first 9B tokens of the Italian part of CulturaX plus the first 3B tokens of the English part. ## Use with Transformers You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import transformers import torch model_id = "SemanticAlignment/Mistral-v0.1-Italian-FVT" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) pipeline("Cosa si può fare in una bella giornata di sole?") ``` Code: https://github.com/SapienzaNLP/sava ## Citation If you use any part of this work, please consider citing the paper as follows: ```bibtex @misc{moroni2025optimizingllmsitalianreducing, title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation}, author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli}, year={2025}, eprint={2504.17025}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2504.17025}, } ```
cybershiptrooper/grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10
cybershiptrooper
2025-05-02T17:37:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:saraprice/llama2-7B-chat-helpful-only", "base_model:finetune:saraprice/llama2-7B-chat-helpful-only", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T15:28:29Z
--- base_model: saraprice/llama2-7B-chat-helpful-only library_name: transformers model_name: grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10 This model is a fine-tuned version of [saraprice/llama2-7B-chat-helpful-only](https://huggingface.co/saraprice/llama2-7B-chat-helpful-only). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cybershiptrooper/grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cybershiptrooper/huggingface/runs/7bijtz0e) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.14.0 - Transformers: 4.51.3 - Pytorch: 2.2.2+cu121 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kavinda123321/speecht5_finetuned_english_ranil_2
kavinda123321
2025-05-02T17:36:50Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:kavinda123321/speecht5_finetuned_test2_p236_id_kavinda", "base_model:finetune:kavinda123321/speecht5_finetuned_test2_p236_id_kavinda", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-05-02T17:36:11Z
--- library_name: transformers license: mit base_model: kavinda123321/speecht5_finetuned_test2_p236_id_kavinda tags: - generated_from_trainer model-index: - name: speecht5_finetuned_english_ranil_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_english_ranil_2 This model is a fine-tuned version of [kavinda123321/speecht5_finetuned_test2_p236_id_kavinda](https://huggingface.co/kavinda123321/speecht5_finetuned_test2_p236_id_kavinda) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 0.5558 | 1.0 | 14 | 0.5676 | | 0.4928 | 2.0 | 28 | 0.5735 | | 0.4671 | 3.0 | 42 | 0.5512 | | 0.4573 | 4.0 | 56 | 0.5707 | | 0.4614 | 5.0 | 70 | 0.5457 | | 0.4366 | 6.0 | 84 | 0.5645 | | 0.4178 | 7.0 | 98 | 0.5562 | | 0.4022 | 8.0 | 112 | 0.5716 | | 0.3996 | 9.0 | 126 | 0.5460 | | 0.3883 | 10.0 | 140 | 0.5708 | | 0.3801 | 11.0 | 154 | 0.5735 | | 0.3716 | 12.0 | 168 | 0.5324 | | 0.3634 | 13.0 | 182 | 0.5505 | | 0.3586 | 14.0 | 196 | 0.5477 | | 0.3589 | 15.0 | 210 | 0.5531 | | 0.3443 | 16.0 | 224 | 0.5551 | | 0.3362 | 17.0 | 238 | 0.5537 | | 0.3404 | 18.0 | 252 | 0.5579 | | 0.3452 | 18.6038 | 260 | 0.5585 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
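No inference snippet is included above; below is a minimal sketch, assuming this checkpoint follows the standard SpeechT5 text-to-speech interface (the x-vector source and vocoder are common defaults, not confirmed by this card):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "kavinda123321/speecht5_finetuned_english_ranil_2"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a 512-dim speaker embedding; this x-vector set is a common choice
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```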
SemanticAlignment/Llama-3-1-8B-Italian-FVT
SemanticAlignment
2025-05-02T17:35:40Z
3
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "it", "en", "arxiv:2504.17025", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-10T13:08:28Z
--- language: - it - en license: apache-2.0 library_name: transformers pipeline_tag: text-generation base_model: - meta-llama/Llama-3.1-8B --- # Llama-3.1-8B-Italian-FVT <div align="center"> <img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" /> </div> The **Llama-3.1-8B-Adapted** collection of large language models (LLMs) is a family of adapted generative models at the 8B scale (text in/text out), derived from **Llama-3.1-8B**. *Llama-3.1-8B-Italian-FVT* is a continually trained Llama model after tokenizer substitution. The tokenizer of this model after adaptation is the same as [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0). **Model developers:** SapienzaNLP, ISTI-CNR, ILC-CNR **Model Architecture:** Llama-3.1-8B-Adapted is an auto-regressive language model that uses an optimized transformer architecture. ## Data used for the adaptation The **Llama-3.1-8B-Adapted** model was trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX). The data were skewed toward Italian, with English making up roughly one quarter: the first 9B tokens of the Italian part of CulturaX plus the first 3B tokens of the English part. ## Use with Transformers You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import transformers import torch model_id = "SemanticAlignment/Llama-3.1-8B-Italian-FVT" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) pipeline("Cosa si può fare in una bella giornata di sole?") ``` Code: https://github.com/SapienzaNLP/sava ## Citation If you use any part of this work, please consider citing the paper as follows: ```bibtex @misc{moroni2025optimizingllmsitalianreducing, title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation}, author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli}, year={2025}, eprint={2504.17025}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2504.17025}, } ```
vermoney/ec641c0c-3b83-4c7f-9359-7d70eb04ec47
vermoney
2025-05-02T17:35:23Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-02T17:27:48Z
--- library_name: peft license: other base_model: Qwen/Qwen1.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: ec641c0c-3b83-4c7f-9359-7d70eb04ec47 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen1.5-0.5B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 519dc324fa90419b_train_data.json ds_type: json format: custom path: /workspace/input_data/519dc324fa90419b_train_data.json type: field_input: raw_texts field_instruction: gen_questions field_output: Positive format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vermoney/ec641c0c-3b83-4c7f-9359-7d70eb04ec47 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/519dc324fa90419b_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 34c11394-037e-4743-b560-708619a820f6 wandb_project: s56-9 wandb_run: your_name wandb_runid: 34c11394-037e-4743-b560-708619a820f6 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # ec641c0c-3b83-4c7f-9359-7d70eb04ec47 This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0391 | 0.0104 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Tv-Sophie-Rain-Sophie-Rain-Spiderman-Video/Sophie.Rain.Sophie.Rain.SpiderMan.Video.Tutorial
Tv-Sophie-Rain-Sophie-Rain-Spiderman-Video
2025-05-02T17:30:43Z
0
0
null
[ "region:us" ]
null
2025-05-02T17:30:23Z
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p> <a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a></p> <p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p> 03 seconds ago L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
gulzi/Kaz_Roberta_fine_tuned
gulzi
2025-05-02T17:26:50Z
0
0
null
[ "roberta", "license:apache-2.0", "region:us" ]
null
2025-05-02T17:20:56Z
--- license: apache-2.0 ---
mradermacher/IOM-Qwen2.5-1.5B-GGUF
mradermacher
2025-05-02T17:24:11Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:XUxs/IOM-Qwen2.5-1.5B", "base_model:quantized:XUxs/IOM-Qwen2.5-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T17:12:36Z
--- base_model: XUxs/IOM-Qwen2.5-1.5B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/XUxs/IOM-Qwen2.5-1.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
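As a minimal sketch, a single quant from the table above can be fetched and run directly with llama.cpp's remote-file flags (assuming a recent llama.cpp build; the Q4_K_M choice and prompt are illustrative):

```bash
llama-cli --hf-repo mradermacher/IOM-Qwen2.5-1.5B-GGUF --hf-file IOM-Qwen2.5-1.5B.Q4_K_M.gguf -p "Hello"
```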
BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm
BootesVoid
2025-05-02T17:23:18Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-02T17:23:15Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: VICTORIA --- # Cma6Lk7Fb01M0Negal98Rg6Tu_Cma71Dris01Uanegaq1Itfimm <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `VICTORIA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "VICTORIA", "lora_weights": "https://huggingface.co/BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm', weight_name='lora.safetensors') image = pipeline('VICTORIA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm/discussions) to add images that show off what youโ€™ve made with this LoRA.
aleegis/e22ff4b7-26fa-4a5d-a413-4cf35fa31faa
aleegis
2025-05-02T17:18:15Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "license:llama3", "region:us" ]
null
2025-05-02T14:40:25Z
--- library_name: peft license: llama3 base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 tags: - axolotl - generated_from_trainer model-index: - name: e22ff4b7-26fa-4a5d-a413-4cf35fa31faa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - ebdef80c11c8be43_train_data.json ds_type: json format: custom path: /workspace/input_data/ebdef80c11c8be43_train_data.json type: field_instruction: prompt field_output: generation format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/e22ff4b7-26fa-4a5d-a413-4cf35fa31faa hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/ebdef80c11c8be43_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: 8b4c8c80-b92d-409e-b63a-5d20d6027586 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8b4c8c80-b92d-409e-b63a-5d20d6027586 warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # e22ff4b7-26fa-4a5d-a413-4cf35fa31faa This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
martin-rizzo/TinyBreaker.prototype0
martin-rizzo
2025-05-02T17:17:37Z
0
3
null
[ "image-generation", "text-to-image", "art", "pixart-sigma", "image", "en", "arxiv:2403.04692", "base_model:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", "base_model:finetune:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", "license:mit", "region:us" ]
text-to-image
2025-02-09T01:15:51Z
--- license: mit language: - en base_model: - PixArt-alpha/PixArt-Sigma-XL-2-1024-MS - stable-diffusion-v1-5/stable-diffusion-v1-5 tags: - image-generation - text-to-image - art - pixart-sigma - image --- # TinyBreaker (prototype0) <div style="display:flex;justify-content: left"> <a href="https://github.com/martin-rizzo/ComfyUI-TinyBreaker"><img src="https://img.shields.io/badge/GitHub-TinyBreaker-EEE?logo=github&logoColor=white&labelColor=444444" alt="GitHub: TinyBreaker"></a> &ensp; <a href="https://civitai.com/models/1213728"><img src="https://img.shields.io/badge/CivitAI%3A-TinyBreaker-EEE?logo=c%2B%2B&logoColor=white&labelColor=1971C2" alt="CivitAI: TinyBreaker"></a> &ensp; </div> ![TinyBreaker](tinybreaker_grid.jpg) <div style="color: white; background-color: #882200; padding: 12px; border-radius: 6px; margin: 10px 0;"> โš ๏ธ <b>Important:</b> This version has been replaced by "prototype1", which includes VAEs packaged in a different way, enabling extra functionality such as the Tiny Upscaler.<br/> Please download the updated version from this link: <b><a style="color: #80C0FF; font-weight: bold;" href="https://huggingface.co/martin-rizzo/TinyBreaker.prototype1">TinyBreaker (prototype1)</a></b> </div> ## Overview **TinyBreaker** is a hybrid two-step model (base + refiner) designed for efficient image generation on mid-end and low-end hardware. By combining the strengths of PixArt and Photon models, it delivers high-quality images with strong prompt adherence ## Key Features - **Hybrid Two-Step Architecture**: Combines PixArt-Sigma as the base model with a refiner based on Photon (or any SD1.x model), both chosen for their low GPU consumption. - **Efficient Parameter Usage**: The base modelโ€™s 0.6 billion parameters enable high-quality image generation with minimal computational overhead. - **Fast Performance**: Produces high-quality 1536ร—1024 images in ~15 seconds on an NVIDIA RTX 3080 GPU, with ongoing work to cut generation times to under 10 seconds. - **High Prompt Adherence**: Generates images that closely match user prompts and expectations, thanks to the robust performance of the PixArt-Sigma model and the T5 text encoder. - **Optimized Latent Space Processing**: Leverages Tiny Autoencoders for efficient latent space conversion. ## Usage Requirements Currently, TinyBreaker can only be used with ComfyUI. To utilize it, you'll need to install the custom nodes specific to this model through the [ComfyUI-TinyBreaker GitHub repository](https://github.com/martin-rizzo/ComfyUI-TinyBreaker). ## Limitations - **Text Generation**: Generating legible text within images is a challenge due to PixArt's training limitations. Enhancements in this area may require extensive retraining. - **Human Anatomy in Complex Poses**: While the model performs reliably with standard poses (e.g., standing, facing the camera), it struggles with anatomical accuracy in poses that require more complex or dynamic actions. - **Complex Human Interactions**: The model has difficulty generating detailed scenes involving intricate interactions among people, as well as interactions between people and objects, such as collaborative tasks or dynamic object manipulation. Note: The current "Prototype1" version of TinyBreaker utilizes PixArt-Sigma 1024 and Photon models **without any additional training or fine-tuning**. 
In the future, if I have the resources, I plan to train both models together to generate images of even greater quality ## Future Directions I am dedicated to improving TinyBreaker's performance and accessibility, especially for users with mid-range or lower-end hardware. Looking forward to future updates as I continue to expand TinyBreaker's capabilities. ## Acknowledgments * I extend my sincere thanks to the PixArt-ฮฃ developers for their exceptional model, which has been vital to this project's development. [PixArt-ฮฃ GitHub Repository](https://github.com/PixArt-alpha/PixArt-sigma) | [PixArt-ฮฃ Hugging Face Model](https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS) | [PixArt-ฮฃ arXiv Report](https://arxiv.org/abs/2403.04692) * Additional thanks to Ollin Boer Bohan for the Tiny AutoEncoder models, which offer efficient latent image processing and served as the foundation for the encoding, decoding, and transcoding operations in TinyBreaker. [Tiny AutoEncoder GitHub Repository](https://github.com/madebyollin/taesd) ## Resources - [TinyBreaker on CivitAI](https://civitai.com/models/1213728/tinybreaker): A hub for exploring generated images, prompts, and workflows created by me and the community, showcasing the model's output quality. - [ComfyUI-TinyBreaker](https://github.com/martin-rizzo/ComfyUI-TinyBreaker): Nodes and workflows for ComfyUI to experiment with the model's capabilities. - [TinyBreakerTools](https://github.com/martin-rizzo/TinyBreakerTools): Tools I'm building for the model, mainly to create the safetensors file for TinyBreaker. - [AbominableWorkflows](https://github.com/martin-rizzo/AbominableWorkflows): A predecessor of TinyBreaker. My first experiment combining PixArt-Sigma and Photon without Python code, using only standard nodes from ComfyUI.
user074/selfplay_qwen3b
user074
2025-05-02T17:14:10Z
0
0
null
[ "safetensors", "qwen2", "text-generation", "conversational", "en", "arxiv:2407.10671", "license:other", "region:us" ]
text-generation
2025-05-02T17:12:21Z
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE language: - en pipeline_tag: text-generation --- # Qwen2.5-3B ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. - **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the base 3B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 3.09B - Number of Parameters (Non-Embedding): 2.77B - Number of Layers: 36 - Number of Attention Heads (GQA): 16 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code for Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. 
``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
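Following the Requirements note above, here is a minimal sketch of base-model (non-chat) generation, assuming this checkpoint loads with the standard Auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "user074/selfplay_qwen3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base models continue text rather than follow chat turns
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```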
mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF
mradermacher
2025-05-02T17:14:09Z
219
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Nexesenex/Llama_3.x_70b_Tristar_V2.1", "base_model:quantized:Nexesenex/Llama_3.x_70b_Tristar_V2.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-12T09:49:59Z
--- base_model: Nexesenex/Llama_3.x_70b_Tristar_V2.1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_Tristar_V2.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q5_K_M.gguf) | Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
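For the multi-part Q6_K and Q8_0 files above, the parts are plain splits that can be reassembled before use; a minimal sketch (see the linked README for details):

```bash
cat Llama_3.x_70b_Tristar_V2.1.Q6_K.gguf.part1of2 \
    Llama_3.x_70b_Tristar_V2.1.Q6_K.gguf.part2of2 \
    > Llama_3.x_70b_Tristar_V2.1.Q6_K.gguf
```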
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mlx-community/MiMo-7B-SFT-4bit
mlx-community
2025-05-02T17:13:51Z
0
0
mlx
[ "mlx", "safetensors", "mimo", "text-generation", "conversational", "custom_code", "base_model:XiaomiMiMo/MiMo-7B-SFT", "base_model:quantized:XiaomiMiMo/MiMo-7B-SFT", "license:mit", "4-bit", "region:us" ]
text-generation
2025-05-02T17:02:43Z
--- license: mit base_model: XiaomiMiMo/MiMo-7B-SFT library_name: mlx pipeline_tag: text-generation tags: - mlx --- # mlx-community/MiMo-7B-SFT-4bit This model [mlx-community/MiMo-7B-SFT-4bit](https://huggingface.co/mlx-community/MiMo-7B-SFT-4bit) was converted to MLX format from [XiaomiMiMo/MiMo-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-7B-SFT) using mlx-lm version **0.24.0**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/MiMo-7B-SFT-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF
DreadPoor
2025-05-02T17:13:21Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DreadPoor/mergekit-linear-vqtsxly", "base_model:quantized:DreadPoor/mergekit-linear-vqtsxly", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T17:12:55Z
--- base_model: DreadPoor/mergekit-linear-vqtsxly library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF This model was converted to GGUF format from [`DreadPoor/mergekit-linear-vqtsxly`](https://huggingface.co/DreadPoor/mergekit-linear-vqtsxly) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DreadPoor/mergekit-linear-vqtsxly) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -c 2048 ```
diegobit/llama-3-8b-ita-4k-orpo-v3
diegobit
2025-05-02T17:10:46Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "dataset:mii-community/ultrafeedback-preferences-translated-ita", "dataset:efederici/alpaca-vs-alpaca-orpo-dpo", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-10T08:05:07Z
--- library_name: transformers tags: - unsloth license: llama3 datasets: - mii-community/ultrafeedback-preferences-translated-ita - efederici/alpaca-vs-alpaca-orpo-dpo --- # Model Card for Model ID This is llama-3-8b ORPO finetuning for the italian language over a concatenation of two datasets: - [mii-community/ultrafeedback-preferences-translated-ita](https://huggingface.co/datasets/mii-community/ultrafeedback-preferences-translated-ita) - [efederici/alpaca-vs-alpaca-orpo-dpo](https://huggingface.co/datasets/efederici/alpaca-vs-alpaca-orpo-dpo) The other two differences with `diegobit/llama-3-8b-Instruct-bnb-4bit-ita-orpo` are: - the starting model, not instruct, `astronomer/Llama-3-8B-Special-Tokens-Adjusted` instead of `unsloth/llama-3-8b-Instruct-bnb-4bit` - no loading in 4bits - given the increased need of GPU memory, the sequence max length used for finetuning is 4096 ## Model Details ### Model Description - **Developed by:** Diego Giorgini - **Funded by:** AI Technologies SRL - www.aitechnologies.it - **Language(s) (NLP):** Italian - **License:** llama3 - **Finetuned from model:** astronomer/Llama-3-8B-Special-Tokens-Adjusted ## Training Details ### Environment unsloth: 2024.5 torch: 2.2 ### Training Data - `mii-community/ultrafeedback-preferences-translated-ita` is a selection of 55k rows of the ultrafeedback dataset, translated into italian with argotranslate. - `efederici/alpaca-vs-alpaca-orpo-dpo`: The Alpaca vs. Alpaca dataset is a curated blend of the Alpaca dataset and the Alpaca GPT-4 dataset, both available on HuggingFace Datasets. It uses the standard GPT dataset as the 'rejected' answer, steering the model towards the GPT-4 answer, which is considered as the 'chosen' one. ### Training Procedure #### Preprocessing [optional] - No preprocessing has been performed, except for formatting with the llama3 chat_template from unsloth: ```tokenizer = get_chat_template(tokenizer, chat_template = "llama-3")``` #### Training Hyperparameters - **Training regime:** bf16 - **Model loading parameters:** ``` max_seq_length = 4096 dtype = None load_in_4bit = False ``` - **PEFT parameters:** ``` r = 64 lora_alpha = 64 lora_dropout = 0 bias = "none" random_state = 3407 use_rslora = False loftq_config = None ``` - **ORPOConfig parameters:** ``` max_length = 4096 max_prompt_length = max_seq_length//2 max_completion_length = max_seq_length//2 warmup_ratio = 0.1 weight_decay = 0.01 per_device_train_batch_size = 1 gradient_accumulation_steps = 16 learning_rate=8e-6 beta = 0.1 optim = "paged_adamw_8bit" lr_scheduler_type = "linear" num_train_epochs = 1 ``` #### Speeds, Sizes, Times 19h on an A100-40GB ## Model Card Contact [email protected]
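The card covers training only; below is a minimal inference sketch, assuming the repo loads with the standard Auto classes (it is tagged 4-bit, so `bitsandbytes` must be installed) and uses the llama-3 chat template applied during finetuning:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diegobit/llama-3-8b-ita-4k-orpo-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Qual è la capitale d'Italia?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```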
Disya/shuttle-3-mini-Q4_K_M-GGUF
Disya
2025-05-02T17:10:10Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:shuttleai/shuttle-3-mini", "base_model:quantized:shuttleai/shuttle-3-mini", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T17:09:32Z
--- base_model: shuttleai/shuttle-3-mini tags: - llama-cpp - gguf-my-repo --- # Disya/shuttle-3-mini-Q4_K_M-GGUF This model was converted to GGUF format from [`shuttleai/shuttle-3-mini`](https://huggingface.co/shuttleai/shuttle-3-mini) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/shuttleai/shuttle-3-mini) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -c 2048 ```
zera09/qwen2.5-3b-fin-chat
zera09
2025-05-02T17:06:40Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-02T16:53:10Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct library_name: transformers model_name: qwen2.5-3b-fin-chat tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-3b-fin-chat This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zera09/qwen2.5-3b-fin-chat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zeramarveenlyngkhoi/huggingface/runs/ariddybx) This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ArtusDev/Qwen3-235B-A22B-GGUF
ArtusDev
2025-05-02T17:05:09Z
4
2
null
[ "gguf", "imatrix", "qwen3_moe", "conversational", "ik_llama.cpp", "text-generation", "base_model:Qwen/Qwen3-235B-A22B", "base_model:quantized:Qwen/Qwen3-235B-A22B", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-05-01T13:55:13Z
--- quantized_by: ArtusDev pipeline_tag: text-generation base_model: Qwen/Qwen3-235B-A22B license: mit base_model_relation: quantized tags: - imatrix - qwen3_moe - conversational - ik_llama.cpp --- ## `ik_llama.cpp` imatrix Quantizations of Qwen/Qwen3-235B-A22B This quant collection **REQUIRES** [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support advanced non-linear SotA quants. Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc! These quants provide best in class quality for the given memory footprint. ## Big Thanks Shout out to [@ubergarm](https://huggingface.co/ubergarm) for his diligent work on ik_llama.cpp oriented quanting.
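A minimal sketch of building the required fork and serving a quant (assuming ik_llama.cpp builds like mainline llama.cpp with CMake; the file name below is illustrative, not a confirmed file in this repo):

```bash
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build && cmake --build build --config Release -j
./build/bin/llama-server -m Qwen3-235B-A22B.IQ4_K.gguf -c 8192
```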
dgambettaphd/M_llm2_gen2_S_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-05-02T17:04:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T17:03:52Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Resfir/KFC-Net-bio
Resfir
2025-05-02T16:59:58Z
0
0
null
[ "pytorch", "roberta", "region:us" ]
null
2025-05-02T16:44:58Z
# KFC-Net (Knowledge Fusion & Compression Net)

Implementation of the framework described in **"Research on Multi-Task Biomedical Named Entity Recognition Method Based on Knowledge Distillation"**, accepted by Hohai University.

🔑 **Core Features**
- **Multi-Teacher Knowledge Fusion**: aggregates predictions from single-task teachers via probability-space alignment.
- **Lightweight Deployment**: supports DistilBERT (253 MB) and TinyBERT (54 MB) with 7,200 samples/sec inference speed.
- **State-of-the-Art Performance**: achieves 93.62% F1 on BC5CDR-Chem and 88.34% F1 on NCBI-Disease.

🚀 **Usage**

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("Resfir/KFC-Net-bio")
tokenizer = AutoTokenizer.from_pretrained("Resfir/KFC-Net-bio")

text = "EGFR mutations increase sensitivity to gefitinib in non-small cell lung cancer."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs).logits
# Per-token label ids; map them to tag names via model.config.id2label.
predictions = outputs.argmax(dim=-1).squeeze().tolist()
```

📈 **Performance**

| Dataset | Precision | Recall | F1 |
| ------------ | --------- | ------ | ------ |
| NCBI-Disease | 86.87% | 89.86% | 88.34% |
| BC5CDR-Chem | 94.48% | 92.77% | 93.62% |
| BC2GM | 83.29% | 84.40% | 83.84% |

⚠️ **Limitations**
- Performance drops observed on nested entities (e.g., "IL-2 receptor alpha chain").
- Requires alignment of entity type schemas across teachers.
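To get entity spans instead of raw label ids, the `transformers` NER pipeline bundles tokenization, label mapping, and subword aggregation in one call. This is a minimal sketch, assuming the checkpoint's `config.json` carries the usual `id2label` BIO tag mapping (not confirmed by this card):

```python
from transformers import pipeline

# "simple" aggregation merges B-/I- subword tags into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="Resfir/KFC-Net-bio",
    aggregation_strategy="simple",
)

text = "EGFR mutations increase sensitivity to gefitinib in non-small cell lung cancer."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```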
KingEmpire/sn21_omega_0205_3
KingEmpire
2025-05-02T16:57:24Z
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-02T16:29:00Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Ishwak1/bert-ufc-win-predictor
Ishwak1
2025-05-02T16:57:13Z
0
0
null
[ "safetensors", "distilbert", "text-classification", "ufc", "prediction", "sports", "en", "license:mit", "region:us" ]
text-classification
2025-05-02T05:18:44Z
---
language: en
license: mit
tags:
- text-classification
- ufc
- prediction
- sports
---

# UFC Fight Outcome Predictor (DistilBERT-based)

This model is a fine-tuned DistilBERT classifier designed to predict the **outcome of UFC fights** from textual inputs such as pre-fight analysis and fighter stats. It is trained as a **binary text classification** model.

## Use Case

You can use this model to:
- Predict likely fight outcomes from textual descriptions

## Model Details

- **Base model**: `distilbert-base-uncased`
- **Task**: Binary text classification (Win / Loss)
- **Training data**: Custom UFC-related dataset
- **Input**: Text (e.g., fighter matchups, stats)
- **Output**: Binary class prediction (`0 = Fighter A wins`, `1 = Fighter B wins`, matching the code below)

## Example Usage (Python)

```python
import torch
from transformers import DistilBertForSequenceClassification, DistilBertTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

loaded_model = DistilBertForSequenceClassification.from_pretrained("Ishwak1/bert-ufc-win-predictor").to(device)
loaded_tokenizer = DistilBertTokenizer.from_pretrained("Ishwak1/bert-ufc-win-predictor")

def predict_winner(fighter_a_stats, fighter_b_stats, model, tokenizer):
    input_text = f"Fighter A: {fighter_a_stats} || Fighter B: {fighter_b_stats}"
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True).to(device)
    outputs = model(**inputs)
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
    pred = torch.argmax(probs, dim=1).item()
    return {"Fighter A wins": float(probs[0][0]), "Fighter B wins": float(probs[0][1])}, pred

fighter_a = "Height: 73 in | Reach: 80 in | Str. Acc: 0.57 | Str. Def: 0.58 | SLpM: 4.25 | SApM: 2.12"
fighter_b = "Height: 70 in | Reach: 71 in | Str. Acc: 0.49 | Str. Def: 0.55 | SLpM: 4.00 | SApM: 3.00"

probs, winner = predict_winner(fighter_a, fighter_b, loaded_model, loaded_tokenizer)
print(probs, "Winner Label (0=A, 1=B):", winner)
# Example output:
# {'Fighter A wins': 0.03644789755344391, 'Fighter B wins': 0.9635520577430725} Winner Label (0=A, 1=B): 1
```

## Files

- model.safetensors: The model weights in safetensors format
- config.json: Model architecture config
- tokenizer_config.json, special_tokens_map.json, vocab.txt: Tokenizer files

✍️ **Author**

Created by @Ishwak1

For questions or fine-tuning on your own fight data, feel free to open a discussion!
phospho-app/Starkosaure-Stuffed_Animal_3cam_V0.0-rroebs93ru
phospho-app
2025-05-02T16:56:21Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-05-02T16:53:05Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 224, in predict raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 277, in apply_rotary_pos_emb q_embed = (q * cos) + (rotate_half(q) * sin) ^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 252, in rotate_half return torch.cat((-x2, x1), dim=-1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 38.75 MiB is free. Process 17 has 79.21 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 336.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) 0%| | 0/450 [00:23<?, ?it/s] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/src/helper.py", line 226, in predict raise RuntimeError(e) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 277, in apply_rotary_pos_emb q_embed = (q * cos) + (rotate_half(q) * sin) ^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 252, in rotate_half return torch.cat((-x2, x1), dim=-1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 38.75 MiB is free. Process 17 has 79.21 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 336.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) 0%| | 0/450 [00:23<?, ?it/s] ``` ## Training parameters: - **Dataset**: [Starkosaure/Stuffed_Animal_3cam_V0.0](https://huggingface.co/datasets/Starkosaure/Stuffed_Animal_3cam_V0.0) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 64 - **Training steps**: 443 ๐Ÿ“– **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) ๐Ÿค– **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
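The traceback itself names one mitigation (the allocator setting), and the training parameters above suggest another: the batch size of 64. A hedged sketch of both, assuming the phospho pipeline honors standard PyTorch environment variables and lets you resubmit the job with a smaller batch:

```bash
# Mitigation 1: let the CUDA caching allocator grow segments instead of
# fragmenting, as suggested in the error message above.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Mitigation 2: rerun the job with a smaller batch size (e.g. 16 or 32
# instead of 64), which cuts peak activation memory roughly proportionally.
```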
treasure4l/Gemma2-Instruct-DPO
treasure4l
2025-05-02T16:56:04Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "dpo", "arxiv:2305.18290", "base_model:unsloth/gemma-2-9b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-2-9b-it-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-05-02T16:55:50Z
--- base_model: unsloth/gemma-2-9b-it-bnb-4bit library_name: transformers model_name: Gemma2-Instruct-DPO tags: - generated_from_trainer - unsloth - trl - dpo licence: license --- # Model Card for Gemma2-Instruct-DPO This model is a fine-tuned version of [unsloth/gemma-2-9b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-2-9b-it-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="treasure4l/Gemma2-Instruct-DPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
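The card documents the DPO method and TRL 0.15.2 but not the training script. Below is a minimal, hedged sketch of what such a run typically looks like with TRL's `DPOTrainer`; the preference dataset shown is an illustrative stand-in, not the one actually used for this model:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "unsloth/gemma-2-9b-it-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id)  # 4-bit weights; requires bitsandbytes
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with "prompt"/"chosen"/"rejected" columns works here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="Gemma2-Instruct-DPO", beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```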
mradermacher/Thought-Aligner-7B-v1.0-GGUF
mradermacher
2025-05-02T16:53:33Z
0
0
transformers
[ "transformers", "gguf", "safety", "ai-safety", "aligner", "en", "base_model:fgdrg/Thought-Aligner-7B-v1.0", "base_model:quantized:fgdrg/Thought-Aligner-7B-v1.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T16:10:44Z
--- base_model: fgdrg/Thought-Aligner-7B-v1.0 language: - en library_name: transformers quantized_by: mradermacher tags: - safety - ai-safety - aligner --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/fgdrg/Thought-Aligner-7B-v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
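Once a quant file is downloaded, running it locally is a single llama.cpp command. A minimal sketch, assuming a local llama.cpp build and the Q4_K_M file recommended in the table above:

```bash
# -cnv starts an interactive conversation; swap in any quant from the table.
./llama-cli -m Thought-Aligner-7B-v1.0.Q4_K_M.gguf -cnv -p "You are a helpful assistant."
```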
ajagota71/gpt-neo-125m-detox-epoch-60
ajagota71
2025-05-02T16:50:54Z
0
0
null
[ "safetensors", "gpt_neo", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-05-02T16:50:34Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
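Because this checkpoint was pushed through `PyTorchModelHubMixin`, it loads via the original model class rather than `AutoModel`. A hedged sketch of the pattern; `DetoxModel` is a hypothetical stand-in for the class defined in the (unpublished) training code, whose constructor must match the saved weights:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DetoxModel(nn.Module, PyTorchModelHubMixin):  # hypothetical class name
    def __init__(self):
        super().__init__()
        # The real GPT-Neo-based layers from the training code would go here.

model = DetoxModel.from_pretrained("ajagota71/gpt-neo-125m-detox-epoch-60")
```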
Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF
Lucy-in-the-Sky
2025-05-02T16:50:51Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:kalomaze/Qwen3-16B-A3B", "base_model:quantized:kalomaze/Qwen3-16B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T16:49:29Z
---
base_model: kalomaze/Qwen3-16B-A3B
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---

# Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF

This model was converted to GGUF format from [`kalomaze/Qwen3-16B-A3B`](https://huggingface.co/kalomaze/Qwen3-16B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kalomaze/Qwen3-16B-A3B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -c 2048
```
abkimc/PPO_LunarLander-v2
abkimc
2025-05-02T16:45:58Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-02T16:45:40Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 240.94 +/- 84.34
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption -- confirm it in the repo's Files tab.
checkpoint = load_from_hub("abkimc/PPO_LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
bruhzair/ignore-merge-2
bruhzair
2025-05-02T16:44:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:13:29Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # magnum2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the Passthrough merge method. ### Models Merged The following models were included in the merge: * /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough modules: default: slices: - sources: - layer_range: [0, 4] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [2, 4] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [4, 8] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [6, 8] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [8, 12] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [10, 12] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [12, 16] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [14, 16] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [16, 20] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [18, 20] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [20, 24] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [22, 24] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [24, 28] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [26, 28] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [28, 32] model: 
/workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [30, 32] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [32, 36] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [34, 36] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [36, 40] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [38, 40] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [40, 44] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [42, 44] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [44, 48] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [46, 48] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [48, 52] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [50, 52] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [52, 56] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [54, 56] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [56, 60] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [58, 60] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [60, 64] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [62, 64] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: 
down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [64, 68] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [66, 68] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [68, 72] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [70, 72] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [72, 76] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [74, 76] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [76, 80] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 - sources: - layer_range: [78, 80] model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-v4-SE/snapshots/da9dd890a3c92f6ebef577c5c42fa74ca97c9ff3 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 ```
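To reproduce a merge like this, the YAML above is saved to a file and passed to mergekit's command-line entry point. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the local snapshot paths in the config resolve on your machine:

```bash
# Writes the merged checkpoint to ./magnum2-merged; --cuda uses the GPU
# for tensor operations when available.
mergekit-yaml config.yaml ./magnum2-merged --cuda
```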
arte-in/LenKimono
arte-in
2025-05-02T16:43:50Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:bsd", "region:us" ]
text-to-image
2025-05-02T16:42:51Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- The model ends in a powerful high-fashion runway pose: left leg extended forward, right leg slightly bent, torso twisted to the left, shoulders square, chin slightly tilted up. Her arms are relaxed at her sides, hands curved elegantly. Her facial expression is confident, alive, and intentional โ€” she holds eye contact with the camera, with a subtle intensity in her gaze. Her lips are gently closed, with a composed and focused expression, like a pro at the end of a major fashion show. Lighting is soft and even, neutral background, camera fixed. A professional high-fashion runway pose: confident and elegant, expressive look like at the end of a fashion show. The pose is sharp, refined, and poised. Soft studio lighting, neutral background, no scene changes. Camera remains fixed. Best quality 8K, sharp focus beautiful life face. parameters: negative_prompt: blurred,ugly output: url: images/_produkt.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: bsd --- # LenKimono <Gallery /> ## Model description Len Kimono ## Download model Weights for this model are available in Safetensors format. [Download](/arte-in/LenKimono/tree/main) them in the Files & versions tab.
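The card links the Files tab but gives no loading code. A minimal diffusers sketch in the same style as other FLUX LoRA cards in this collection; the `lora.safetensors` filename is an assumption to verify in the Files & versions tab:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("arte-in/LenKimono", weight_name="lora.safetensors")

image = pipeline("A model in a kimono holds a high-fashion runway pose").images[0]
image.save("kimono.png")
```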
EdwardTurner/Qwen2.5-14B-Instruct_R_0_1_0_B_150_freeze
EdwardTurner
2025-05-02T16:42:45Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T16:19:45Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3
chchen
2025-05-02T16:42:36Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:klyang/MentaLLaMA-chat-7B-hf", "base_model:adapter:klyang/MentaLLaMA-chat-7B-hf", "license:mit", "region:us" ]
null
2025-05-02T14:59:34Z
--- library_name: peft license: mit base_model: klyang/MentaLLaMA-chat-7B-hf tags: - llama-factory - lora - generated_from_trainer model-index: - name: MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3 This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-doc-info-train-fold3 dataset. It achieves the following results on the evaluation set: - Loss: 0.0798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3973 | 0.3951 | 10 | 0.4009 | | 0.2549 | 0.7901 | 20 | 0.2421 | | 0.172 | 1.1852 | 30 | 0.1696 | | 0.1705 | 1.5802 | 40 | 0.1428 | | 0.2097 | 1.9753 | 50 | 0.1237 | | 0.1157 | 2.3704 | 60 | 0.1085 | | 0.0902 | 2.7654 | 70 | 0.0961 | | 0.0917 | 3.1605 | 80 | 0.0900 | | 0.092 | 3.5556 | 90 | 0.0842 | | 0.0637 | 3.9506 | 100 | 0.0814 | | 0.0835 | 4.3457 | 110 | 0.0802 | | 0.0849 | 4.7407 | 120 | 0.0798 | ### Framework versions - PEFT 0.12.0 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
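Since this repository holds a LoRA adapter (PEFT 0.12.0) rather than full weights, inference requires loading the base model and attaching the adapter on top. A minimal sketch under that assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")
model = PeftModel.from_pretrained(base, "chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3")
tokenizer = AutoTokenizer.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")

inputs = tokenizer("Summarize the documentation requirements:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```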
VincentG1234/QWEN_7BQLORA_finetuned_r8_alpha16
VincentG1234
2025-05-02T16:42:25Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-02T16:42:21Z
--- base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** VincentG1234 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
JQ1984/GDPR_clause_prediction_legalbert_model
JQ1984
2025-05-02T16:38:50Z
0
1
null
[ "safetensors", "bert", "text-classification", "en", "dataset:JQ1984/GDPRcasedata", "base_model:JQ1984/legalbert_gdpr_pretrained", "base_model:finetune:JQ1984/legalbert_gdpr_pretrained", "license:cc-by-sa-4.0", "region:us" ]
text-classification
2025-05-02T16:24:16Z
--- license: cc-by-sa-4.0 datasets: - JQ1984/GDPRcasedata language: - en metrics: - accuracy base_model: - JQ1984/legalbert_gdpr_pretrained pipeline_tag: text-classification ---
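The metadata declares `pipeline_tag: text-classification` but the card has no usage snippet. A minimal sketch; the predicted label names depend on the checkpoint's `id2label` config, which this card does not document:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JQ1984/GDPR_clause_prediction_legalbert_model",
)

clause = ("The controller shall notify the supervisory authority of a "
          "personal data breach without undue delay.")
print(classifier(clause))  # e.g. [{'label': '...', 'score': 0.97}]
```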
ail-sa/rahul_muscular_long_fs_cleaned_v1
ail-sa
2025-05-02T16:28:41Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-02T15:51:47Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sid --- # Rahul_Muscular_Long_Fs_Cleaned_V1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sid` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sid", "lora_weights": "https://huggingface.co/ail-sa/rahul_muscular_long_fs_cleaned_v1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ail-sa/rahul_muscular_long_fs_cleaned_v1', weight_name='lora.safetensors') image = pipeline('Sid').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/ail-sa/rahul_muscular_long_fs_cleaned_v1/discussions) to add images that show off what youโ€™ve made with this LoRA.
shubhamprshr/Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300
shubhamprshr
2025-05-02T16:26:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "dataset:blocksworld-dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T13:38:36Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct datasets: blocksworld-dataset library_name: transformers model_name: Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/11btvsy2) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.14.0 - Transformers: 4.48.1 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
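The card reports GRPO training with TRL 0.14.0 but omits the training script. A hedged sketch of a GRPO run in that TRL version; both the prompt dataset and the reward function below are illustrative placeholders, since the actual Blocksworld reward used for this model is not published here:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompt dataset from the TRL docs; the real run used the
# blocksworld-dataset referenced above.
dataset = load_dataset("trl-lib/tldr", split="train")

# Placeholder reward; the real run scored Blocksworld plan validity instead.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="blocksworld-grpo"),
    train_dataset=dataset,
)
trainer.train()
```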
chchen/Llama3-OpenBioLLM-8B-PsyCourse-info-fold5
chchen
2025-05-02T16:25:06Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:aaditya/Llama3-OpenBioLLM-8B", "base_model:adapter:aaditya/Llama3-OpenBioLLM-8B", "license:llama3", "region:us" ]
null
2025-05-02T15:27:21Z
--- library_name: peft license: llama3 base_model: aaditya/Llama3-OpenBioLLM-8B tags: - llama-factory - lora - generated_from_trainer model-index: - name: Llama3-OpenBioLLM-8B-PsyCourse-info-fold5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama3-OpenBioLLM-8B-PsyCourse-info-fold5 This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-info-train-fold5 dataset. It achieves the following results on the evaluation set: - Loss: 0.1646 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5193 | 0.3951 | 10 | 0.4232 | | 0.2655 | 0.7901 | 20 | 0.2431 | | 0.1575 | 1.1852 | 30 | 0.2011 | | 0.1529 | 1.5802 | 40 | 0.1788 | | 0.1367 | 1.9753 | 50 | 0.1676 | | 0.1243 | 2.3704 | 60 | 0.1694 | | 0.0943 | 2.7654 | 70 | 0.1699 | | 0.0697 | 3.1605 | 80 | 0.1646 | | 0.056 | 3.5556 | 90 | 0.1669 | | 0.0546 | 3.9506 | 100 | 0.1724 | | 0.0512 | 4.3457 | 110 | 0.1724 | | 0.0789 | 4.7407 | 120 | 0.1712 | ### Framework versions - PEFT 0.12.0 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
lisabdunlap/Llama-3.1-8B-Instruct-unsloth-bnb-4bit-r32-e20-lr0.0002-mixed-markdown_format_small-new
lisabdunlap
2025-05-02T16:21:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:19:19Z
--- base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** lisabdunlap - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jq/gemma3-12b-ug40-lora-translation-r8-bs128
jq
2025-05-02T16:17:46Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:jq/gemma3-12b-ug40-pretrained", "base_model:finetune:jq/gemma3-12b-ug40-pretrained", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-02T16:17:33Z
--- base_model: jq/gemma3-12b-ug40-pretrained tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jq - **License:** apache-2.0 - **Finetuned from model :** jq/gemma3-12b-ug40-pretrained This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sakib323/MMfreeLM-370M-CodeGenerator
Sakib323
2025-05-02T16:16:14Z
0
0
transformers
[ "transformers", "safetensors", "hgrn_bit", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:14:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Vaunorage/gemma-3-1b-it-unsloth-bnb-4bit-pretrain-legis-quebec
Vaunorage
2025-05-02T16:13:48Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:13:03Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Vaunorage - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nemo4aerobat/llama3.1_8b_cpt_compliance3
nemo4aerobat
2025-05-02T16:12:20Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T16:06:34Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** nemo4aerobat - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)