modelId (string, 5 to 138 chars) | author (string, 2 to 42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-05-23 00:40:17) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 474 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-05-23 00:38:52) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Forza-14/LIVE | Forza-14 | "2025-04-19T20:11:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-19T20:09:58Z" | [๐ดGO LIVE๐๐ข==โบโบ CLICK HERE TO STREAMING](https://tvstream.fun/mma/)
[๐ดSTREAMING๐๐ข==โบโบ CLICK HERE TO WATCH LIVE](https://tvstream.fun/mma/)
[<img alt="fsd" src="https://i.postimg.cc/zGBTGx5J/tv-image.gif">](https://tvstream.fun/mma/) |
MaryemOuichka/mistral_finetuned_ce_poste_est_pour_moi | MaryemOuichka | "2025-04-19T20:09:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T20:03:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
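Until the author fills this in, here is a minimal sketch assuming standard 🤗 Transformers chat usage (a recent transformers version with chat-pipeline support; the prompt is illustrative):
```python
from transformers import pipeline

# Hypothetical usage sketch; the repository id is taken from this card's header
chat = pipeline("text-generation", model="MaryemOuichka/mistral_finetuned_ce_poste_est_pour_moi")
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```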
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MBZUAI/ArTSTv3 | MBZUAI | "2025-04-19T20:07:08Z" | 0 | 0 | null | [
"ar",
"arxiv:2110.07205",
"arxiv:2411.05872",
"license:mit",
"region:us"
] | null | "2025-04-19T18:10:03Z" | ---
license: mit
language:
- ar
---
## Checkpoints
### Pre-Trained Models
| Model | Pre-train Dataset | Checkpoint | Tokenizer |
| --- | --- | --- | --- |
| ArTST v3 base | Multilingual | [Hugging Face](https://huggingface.co/MBZUAI/ArTSTv3/blob/main/pretrain_checkpoint.pt) | [Hugging Face](https://huggingface.co/MBZUAI/ArTSTv3/blob/main/tokenizer_artstv3.model) |
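Both files can be fetched programmatically; a minimal sketch with `huggingface_hub` (filenames taken from the table above):
```python
from huggingface_hub import hf_hub_download

# Download the ArTST v3 pre-trained checkpoint and its tokenizer model
ckpt_path = hf_hub_download(repo_id="MBZUAI/ArTSTv3", filename="pretrain_checkpoint.pt")
tok_path = hf_hub_download(repo_id="MBZUAI/ArTSTv3", filename="tokenizer_artstv3.model")
print(ckpt_path, tok_path)
```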
# Acknowledgements
ArTST is built on the [SpeechT5](https://arxiv.org/abs/2110.07205) architecture. If you use any of the ArTST models, please cite:
```
@inproceedings{toyin2023artst,
title={ArTST: Arabic Text and Speech Transformer},
author={Toyin, Hawau and Djanibekov, Amirbek and Kulkarni, Ajinkya and Aldarmaki, Hanan},
booktitle={Proceedings of ArabicNLP 2023},
pages={41--51},
year={2023}
}
@misc{djanibekov2024dialectalcoveragegeneralizationarabic,
title={Dialectal Coverage And Generalization in Arabic Speech Recognition},
author={Amirbek Djanibekov and Hawau Olamide Toyin and Raghad Alshalan and Abdullah Alitr and Hanan Aldarmaki},
year={2024},
eprint={2411.05872},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.05872},
}
``` |
Aashish-Yadav/wATCH.Aashish-Yadav-Viral-Aashish-Yadav.original | Aashish-Yadav | "2025-04-19T20:07:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-19T20:03:06Z" | [๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )](https://videohere.top/?Aashish-Yadav)
[โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐คโค๏ธโค๏ธโฌ๏ธโฌ๏ธโ](https://videohere.top/?Aashish-Yadav)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Aashish-Yadav) |
RawandLaouini/voice-of-arabic-v1 | RawandLaouini | "2025-04-19T20:06:42Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-19T19:26:09Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: voice-of-arabic-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# voice-of-arabic-v1
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7317
- Wer: 1.2503
- Cer: 0.9468
## Model description
More information needed
## Intended uses & limitations
More information needed
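As a stopgap, a minimal usage sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Hypothetical sketch: transcribe Arabic speech with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="RawandLaouini/voice-of-arabic-v1")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```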
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.9828 | 0.0451 | 30 | 0.7317 | 1.2503 | 0.9468 |
| 0.4984 | 0.0901 | 60 | 0.4620 | 1.3132 | 2.6431 |
| 0.3416 | 0.1352 | 90 | 0.4144 | 5.7192 | 5.9769 |
| 0.3712 | 0.1802 | 120 | 0.3671 | 6.1371 | 6.3006 |
| 0.3128 | 0.2253 | 150 | 0.3042 | 7.4297 | 8.0200 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.4.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
NAFC-Super-Brawl/LIVE | NAFC-Super-Brawl | "2025-04-19T20:05:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-19T20:03:50Z" | [๐ดGO LIVE๐๐ข==โบโบ CLICK HERE TO STREAMING](https://tvstream.fun/mma/)
[๐ดSTREAMING๐๐ข==โบโบ CLICK HERE TO WATCH LIVE](https://tvstream.fun/mma/)
[<img alt="fsd" src="https://i.postimg.cc/zGBTGx5J/tv-image.gif">](https://tvstream.fun/mma/) |
naxwinn/tinyllama-1.1b-jarvis-qlora | naxwinn | "2025-04-19T20:04:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | "2025-04-19T20:04:42Z" | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
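Pending the author's own instructions, a minimal PEFT sketch (base model taken from this card's metadata):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: attach the LoRA adapter to its base model
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "naxwinn/tinyllama-1.1b-jarvis-qlora")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tokenizer("Hello, Jarvis.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```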
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
stardriver007/deepseek-6.7b-instruct-only-finetuned-v1 | stardriver007 | "2025-04-19T20:03:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T19:34:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
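In the meantime, a minimal sketch assuming standard causal-LM usage with a chat template (an untested assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch; the repository id is taken from this card's header
repo = "stardriver007/deepseek-6.7b-instruct-only-finetuned-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Write a short docstring for a sort function."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(input_ids.to(model.device), max_new_tokens=64)[0]))
```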
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Turalll/llama-3.2-1B-lora-instruct-classifier-110k | Turalll | "2025-04-19T20:02:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T20:02:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
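The card does not yet say how this checkpoint should be loaded; as a loose sketch only, assuming the repository holds a standard 🤗 Transformers checkpoint rather than a bare adapter:
```python
from transformers import AutoModel, AutoTokenizer

# Loose sketch; the task head (classification vs. generation) is not documented
repo = "Turalll/llama-3.2-1B-lora-instruct-classifier-110k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
```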
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fedovtt/16459f45-01d7-4d7f-9074-a940b72ddd98 | fedovtt | "2025-04-19T20:01:58Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T18:52:25Z" | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16459f45-01d7-4d7f-9074-a940b72ddd98
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0a97b13092c68341_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0a97b13092c68341_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/16459f45-01d7-4d7f-9074-a940b72ddd98
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/0a97b13092c68341_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a34df82e-6929-46c2-aad9-f532243f79f7
wandb_project: 01-31
wandb_run: your_name
wandb_runid: a34df82e-6929-46c2-aad9-f532243f79f7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 16459f45-01d7-4d7f-9074-a940b72ddd98
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0113 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DedeepyaP/empathetic-dialogues_generator | DedeepyaP | "2025-04-19T20:01:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-04-19T20:00:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
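As a stopgap, a minimal sketch assuming standard T5-style text2text usage (the example prompt is illustrative):
```python
from transformers import pipeline

# Hypothetical sketch; the repository id is taken from this card's header
gen = pipeline("text2text-generation", model="DedeepyaP/empathetic-dialogues_generator")
print(gen("I just lost my job and I feel terrible.", max_new_tokens=64)[0]["generated_text"])
```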
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cage-Warriors-187/LIVE | Cage-Warriors-187 | "2025-04-19T20:01:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-19T19:59:53Z" | [๐ดGO LIVE๐๐ข==โบโบ CLICK HERE TO STREAMING](https://tvstream.fun/mma/)
[๐ดSTREAMING๐๐ข==โบโบ CLICK HERE TO WATCH LIVE](https://tvstream.fun/mma/)
[<img alt="fsd" src="https://i.postimg.cc/zGBTGx5J/tv-image.gif">](https://tvstream.fun/mma/) |
mradermacher/Violet_Magcap-12B-GGUF | mradermacher | "2025-04-19T20:00:09Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/Violet_Magcap-12B",
"base_model:quantized:Nitral-AI/Violet_Magcap-12B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T10:22:48Z" | ---
base_model: Nitral-AI/Violet_Magcap-12B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-AI/Violet_Magcap-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Violet_Magcap-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
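For a concrete starting point, a minimal sketch with `llama-cpp-python` (the quant filename is taken from the table below; any other size works the same way):
```python
from llama_cpp import Llama

# Download one of the quants listed below from the Hub and run a short completion
llm = Llama.from_pretrained(
    repo_id="mradermacher/Violet_Magcap-12B-GGUF",
    filename="Violet_Magcap-12B.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,
)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```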
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Violet_Magcap-12B-GGUF/resolve/main/Violet_Magcap-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
raulgdp/Ministral-8B-Instruct-2410-JEP | raulgdp | "2025-04-19T19:58:55Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:adapter:mistralai/Ministral-8B-Instruct-2410",
"license:other",
"region:us"
] | null | "2025-04-19T15:10:20Z" | ---
library_name: peft
license: other
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- generated_from_trainer
model-index:
- name: Ministral-8B-Instruct-2410-JEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ministral-8B-Instruct-2410-JEP
This model is a fine-tuned version of [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: paged 8-bit AdamW (paged_adamw_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3744 | 0.1535 | 100 | 1.3521 |
| 1.287 | 0.3070 | 200 | 1.2976 |
| 1.2346 | 0.4605 | 300 | 1.2699 |
| 1.2384 | 0.6140 | 400 | 1.2527 |
| 1.2937 | 0.7675 | 500 | 1.2421 |
| 1.2046 | 0.9210 | 600 | 1.2340 |
| 1.1915 | 1.0737 | 700 | 1.2277 |
| 1.2159 | 1.2272 | 800 | 1.2253 |
| 1.1631 | 1.3807 | 900 | 1.2206 |
| 1.1935 | 1.5342 | 1000 | 1.2162 |
| 1.1701 | 1.6876 | 1100 | 1.2129 |
| 1.1925 | 1.8411 | 1200 | 1.2067 |
| 1.2215 | 1.9946 | 1300 | 1.2037 |
| 1.1858 | 2.1474 | 1400 | 1.2032 |
| 1.1737 | 2.3008 | 1500 | 1.2008 |
| 1.1751 | 2.4543 | 1600 | 1.1988 |
| 1.1514 | 2.6078 | 1700 | 1.1957 |
| 1.1327 | 2.7613 | 1800 | 1.1930 |
| 1.1266 | 2.9148 | 1900 | 1.1906 |
| 1.0929 | 3.0675 | 2000 | 1.1909 |
| 1.1054 | 3.2210 | 2100 | 1.1913 |
| 1.1097 | 3.3745 | 2200 | 1.1896 |
| 1.2006 | 3.5280 | 2300 | 1.1869 |
| 1.1605 | 3.6815 | 2400 | 1.1839 |
| 1.1155 | 3.8350 | 2500 | 1.1844 |
| 1.1481 | 3.9885 | 2600 | 1.1836 |
| 1.1011 | 4.1412 | 2700 | 1.1878 |
| 1.0627 | 4.2947 | 2800 | 1.1897 |
| 1.1387 | 4.4482 | 2900 | 1.1863 |
| 1.0656 | 4.6017 | 3000 | 1.1826 |
| 1.0951 | 4.7552 | 3100 | 1.1837 |
| 1.0806 | 4.9087 | 3200 | 1.1795 |
| 1.0508 | 5.0614 | 3300 | 1.1830 |
| 1.1051 | 5.2149 | 3400 | 1.1876 |
| 1.0061 | 5.3684 | 3500 | 1.1894 |
| 1.1471 | 5.5219 | 3600 | 1.1811 |
| 1.1143 | 5.6754 | 3700 | 1.1833 |
| 1.1146 | 5.8289 | 3800 | 1.1823 |
| 1.0648 | 5.9823 | 3900 | 1.1837 |
| 1.062 | 6.1351 | 4000 | 1.1903 |
| 1.065 | 6.2886 | 4100 | 1.1877 |
| 1.0379 | 6.4421 | 4200 | 1.1875 |
| 1.0188 | 6.5955 | 4300 | 1.1873 |
| 1.0332 | 6.7490 | 4400 | 1.1850 |
| 1.026 | 6.9025 | 4500 | 1.1854 |
| 1.0365 | 7.0553 | 4600 | 1.1897 |
| 1.0359 | 7.2087 | 4700 | 1.1928 |
| 1.0483 | 7.3622 | 4800 | 1.1921 |
| 0.9988 | 7.5157 | 4900 | 1.1914 |
| 1.0348 | 7.6692 | 5000 | 1.1893 |
| 0.9884 | 7.8227 | 5100 | 1.1879 |
| 1.0903 | 7.9762 | 5200 | 1.1890 |
| 0.9946 | 8.1289 | 5300 | 1.1942 |
| 1.0328 | 8.2824 | 5400 | 1.1941 |
| 1.0031 | 8.4359 | 5500 | 1.1949 |
| 0.9096 | 8.5894 | 5600 | 1.1946 |
| 1.018 | 8.7429 | 5700 | 1.1939 |
| 1.0533 | 8.8964 | 5800 | 1.1920 |
| 0.9476 | 9.0491 | 5900 | 1.1967 |
| 0.9817 | 9.2026 | 6000 | 1.1989 |
| 0.9774 | 9.3561 | 6100 | 1.1987 |
| 1.0092 | 9.5096 | 6200 | 1.1974 |
| 1.0067 | 9.6631 | 6300 | 1.1977 |
| 1.0243 | 9.8166 | 6400 | 1.1983 |
| 0.9359 | 9.9701 | 6500 | 1.1977 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
RosannaMui/llama-3.1-fine-tuned-model | RosannaMui | "2025-04-19T19:57:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-04-18T18:04:21Z" | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama-3.1-fine-tuned-model
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3.1-fine-tuned-model
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RosannaMui/llama-3.1-fine-tuned-model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
stewy33/Llama-3.3-70B-Instruct-Reference-1_3-ccf66f95 | stewy33 | "2025-04-19T19:52:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | "2025-04-19T19:51:08Z" | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
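Pending details from the author, a minimal PEFT sketch (base model taken from this card's metadata; a 70B base needs multiple GPUs or offloading):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Hypothetical sketch: attach the adapter to its 70B base model
base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", device_map="auto"
)
model = PeftModel.from_pretrained(base, "stewy33/Llama-3.3-70B-Instruct-Reference-1_3-ccf66f95")
```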
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
jekunz/smollm360m-da1-is1-ties | jekunz | "2025-04-19T19:52:22Z" | 0 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"jekunz/smollm-360m-cpt-fineweb-icelandic",
"jekunz/smollm-360m-cpt-fineweb-danish",
"base_model:jekunz/smollm-360m-cpt-fineweb-danish",
"base_model:merge:jekunz/smollm-360m-cpt-fineweb-danish",
"base_model:jekunz/smollm-360m-cpt-fineweb-icelandic",
"base_model:merge:jekunz/smollm-360m-cpt-fineweb-icelandic",
"region:us"
] | null | "2025-04-19T19:51:34Z" | ---
base_model:
- jekunz/smollm-360m-cpt-fineweb-icelandic
- jekunz/smollm-360m-cpt-fineweb-danish
tags:
- merge
- mergekit
- lazymergekit
- jekunz/smollm-360m-cpt-fineweb-icelandic
- jekunz/smollm-360m-cpt-fineweb-danish
---
# smollm360m-da1-is1-ties
smollm360m-da1-is1-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jekunz/smollm-360m-cpt-fineweb-icelandic](https://huggingface.co/jekunz/smollm-360m-cpt-fineweb-icelandic)
* [jekunz/smollm-360m-cpt-fineweb-danish](https://huggingface.co/jekunz/smollm-360m-cpt-fineweb-danish)
## 🧩 Configuration
```yaml
models:
- model: jekunz/smollm-360m-cpt-fineweb-icelandic
parameters:
density: 0.5
weight: 1.0
- model: jekunz/smollm-360m-cpt-fineweb-danish
parameters:
density: 0.5
weight: 1.0
merge_method: ties
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jekunz/smollm360m-da1-is1-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mergekit-community/mergekit-slerp-fojmdcf | mergekit-community | "2025-04-19T19:52:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:XCryptoniusX/Kaolinite-Kitara-12B",
"base_model:merge:XCryptoniusX/Kaolinite-Kitara-12B",
"base_model:mergekit-community/mergekit-passthrough-gujurtn",
"base_model:merge:mergekit-community/mergekit-passthrough-gujurtn",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T19:44:14Z" | ---
base_model:
- XCryptoniusX/Kaolinite-Kitara-12B
- mergekit-community/mergekit-passthrough-gujurtn
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [XCryptoniusX/Kaolinite-Kitara-12B](https://huggingface.co/XCryptoniusX/Kaolinite-Kitara-12B)
* [mergekit-community/mergekit-passthrough-gujurtn](https://huggingface.co/mergekit-community/mergekit-passthrough-gujurtn)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/mergekit-passthrough-gujurtn
- model: XCryptoniusX/Kaolinite-Kitara-12B
merge_method: slerp
base_model: XCryptoniusX/Kaolinite-Kitara-12B
dtype: bfloat16
tokenizer_source: union
parameters:
t: [0.1, 0.2, 0.4, 0.8, 0.4, 0.2, 0.1]
```
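The card includes no usage section; as a minimal sketch, the merged checkpoint should load like any other 🤗 Transformers causal LM (an assumption, since only the merge is documented; the dtype mirrors the config above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch; the repository id is taken from this card's header
repo = "mergekit-community/mergekit-slerp-fojmdcf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
```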
|
kokovova/9293d487-d9ad-4300-b8a4-71d9f74b698a | kokovova | "2025-04-19T19:50:21Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:42:23Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9293d487-d9ad-4300-b8a4-71d9f74b698a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a2a84683a8e3b451_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a2a84683a8e3b451_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/9293d487-d9ad-4300-b8a4-71d9f74b698a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a2a84683a8e3b451_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8d439d31-9639-4a45-9afe-9045b1ec9043
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 8d439d31-9639-4a45-9afe-9045b1ec9043
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9293d487-d9ad-4300-b8a4-71d9f74b698a
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.3001 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gabrielc2025/ppo-LunarLander-v2 | gabrielc2025 | "2025-04-19T19:50:19Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-04-19T19:50:00Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.96 +/- 18.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent from the Hub (the checkpoint filename is an assumption based on this repository's naming):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the filename is an assumption
checkpoint = load_from_hub("gabrielc2025/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Aden23/william | Aden23 | "2025-04-19T19:47:48Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-19T19:18:16Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: william
---
# William
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `william` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "william",
"lora_weights": "https://huggingface.co/Aden23/william/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Aden23/william', weight_name='lora.safetensors')
image = pipeline('william').images[0]
image.save('william.png')  # output filename is illustrative
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Aden23/william/discussions) to add images that show off what youโve made with this LoRA.
|
rbelanec/train_rte_1744902665 | rbelanec | "2025-04-19T19:46:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T09:42:54Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_rte_1744902665
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_rte_1744902665
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the rte dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0704
- Num Input Tokens Seen: 107274480
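The adapter can be loaded on top of its base model with PEFT; a minimal sketch (dtype and device placement are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_rte_1744902665")
```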
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 0.0864 | 1.4207 | 200 | 0.1323 | 540280 |
| 0.079 | 2.8414 | 400 | 0.0969 | 1077480 |
| 0.0663 | 4.2567 | 600 | 0.0895 | 1609584 |
| 0.0931 | 5.6774 | 800 | 0.0865 | 2150192 |
| 0.059 | 7.0927 | 1000 | 0.0837 | 2681640 |
| 0.0487 | 8.5134 | 1200 | 0.0818 | 3218528 |
| 0.0693 | 9.9340 | 1400 | 0.0799 | 3757240 |
| 0.0978 | 11.3494 | 1600 | 0.0791 | 4292384 |
| 0.0802 | 12.7701 | 1800 | 0.0775 | 4828992 |
| 0.0499 | 14.1854 | 2000 | 0.0768 | 5364048 |
| 0.0613 | 15.6061 | 2200 | 0.0752 | 5901512 |
| 0.0475 | 17.0214 | 2400 | 0.0745 | 6435768 |
| 0.0849 | 18.4421 | 2600 | 0.0741 | 6974976 |
| 0.0483 | 19.8627 | 2800 | 0.0733 | 7509488 |
| 0.0533 | 21.2781 | 3000 | 0.0735 | 8041736 |
| 0.0662 | 22.6988 | 3200 | 0.0715 | 8583128 |
| 0.0585 | 24.1141 | 3400 | 0.0720 | 9117488 |
| 0.0536 | 25.5348 | 3600 | 0.0720 | 9649136 |
| 0.0489 | 26.9554 | 3800 | 0.0714 | 10191288 |
| 0.0498 | 28.3708 | 4000 | 0.0714 | 10724032 |
| 0.0432 | 29.7914 | 4200 | 0.0711 | 11259816 |
| 0.0535 | 31.2068 | 4400 | 0.0715 | 11805200 |
| 0.0312 | 32.6275 | 4600 | 0.0715 | 12337832 |
| 0.0349 | 34.0428 | 4800 | 0.0714 | 12874672 |
| 0.0412 | 35.4635 | 5000 | 0.0709 | 13408200 |
| 0.0597 | 36.8841 | 5200 | 0.0715 | 13943952 |
| 0.0342 | 38.2995 | 5400 | 0.0704 | 14478600 |
| 0.059 | 39.7201 | 5600 | 0.0704 | 15021728 |
| 0.0522 | 41.1355 | 5800 | 0.0709 | 15548872 |
| 0.0295 | 42.5561 | 6000 | 0.0710 | 16082664 |
| 0.0325 | 43.9768 | 6200 | 0.0711 | 16624832 |
| 0.044 | 45.3922 | 6400 | 0.0711 | 17152040 |
| 0.0588 | 46.8128 | 6600 | 0.0719 | 17696104 |
| 0.0341 | 48.2282 | 6800 | 0.0721 | 18228312 |
| 0.0292 | 49.6488 | 7000 | 0.0730 | 18767376 |
| 0.0316 | 51.0642 | 7200 | 0.0735 | 19300560 |
| 0.0283 | 52.4848 | 7400 | 0.0736 | 19837208 |
| 0.0167 | 53.9055 | 7600 | 0.0731 | 20381384 |
| 0.0312 | 55.3209 | 7800 | 0.0762 | 20917960 |
| 0.0274 | 56.7415 | 8000 | 0.0755 | 21456616 |
| 0.0414 | 58.1569 | 8200 | 0.0755 | 21988808 |
| 0.0384 | 59.5775 | 8400 | 0.0778 | 22526872 |
| 0.0395 | 60.9982 | 8600 | 0.0762 | 23067872 |
| 0.0354 | 62.4135 | 8800 | 0.0781 | 23599328 |
| 0.0255 | 63.8342 | 9000 | 0.0781 | 24138832 |
| 0.0292 | 65.2496 | 9200 | 0.0791 | 24675016 |
| 0.0233 | 66.6702 | 9400 | 0.0797 | 25209352 |
| 0.022 | 68.0856 | 9600 | 0.0810 | 25745352 |
| 0.0069 | 69.5062 | 9800 | 0.0829 | 26284824 |
| 0.0125 | 70.9269 | 10000 | 0.0825 | 26824264 |
| 0.0052 | 72.3422 | 10200 | 0.0870 | 27363992 |
| 0.0222 | 73.7629 | 10400 | 0.0839 | 27904360 |
| 0.0177 | 75.1783 | 10600 | 0.0872 | 28436064 |
| 0.0359 | 76.5989 | 10800 | 0.0884 | 28976440 |
| 0.022 | 78.0143 | 11000 | 0.0893 | 29511840 |
| 0.0055 | 79.4349 | 11200 | 0.0917 | 30049440 |
| 0.011 | 80.8556 | 11400 | 0.0915 | 30590008 |
| 0.0186 | 82.2709 | 11600 | 0.0956 | 31127008 |
| 0.0242 | 83.6916 | 11800 | 0.0971 | 31665584 |
| 0.0262 | 85.1070 | 12000 | 0.0980 | 32199088 |
| 0.0126 | 86.5276 | 12200 | 0.1010 | 32739240 |
| 0.0115 | 87.9483 | 12400 | 0.1037 | 33281296 |
| 0.0202 | 89.3636 | 12600 | 0.1061 | 33819016 |
| 0.0209 | 90.7843 | 12800 | 0.1083 | 34356400 |
| 0.0078 | 92.1996 | 13000 | 0.1106 | 34889896 |
| 0.0097 | 93.6203 | 13200 | 0.1133 | 35429768 |
| 0.0048 | 95.0357 | 13400 | 0.1138 | 35969976 |
| 0.0062 | 96.4563 | 13600 | 0.1164 | 36505712 |
| 0.0024 | 97.8770 | 13800 | 0.1196 | 37036976 |
| 0.0023 | 99.2923 | 14000 | 0.1213 | 37570400 |
| 0.0026 | 100.7130 | 14200 | 0.1236 | 38103616 |
| 0.003 | 102.1283 | 14400 | 0.1292 | 38636544 |
| 0.0026 | 103.5490 | 14600 | 0.1275 | 39171560 |
| 0.0083 | 104.9697 | 14800 | 0.1316 | 39706992 |
| 0.0014 | 106.3850 | 15000 | 0.1339 | 40239280 |
| 0.0084 | 107.8057 | 15200 | 0.1374 | 40778072 |
| 0.0061 | 109.2210 | 15400 | 0.1412 | 41312720 |
| 0.0024 | 110.6417 | 15600 | 0.1484 | 41845224 |
| 0.0029 | 112.0570 | 15800 | 0.1469 | 42384256 |
| 0.0014 | 113.4777 | 16000 | 0.1485 | 42925008 |
| 0.0015 | 114.8984 | 16200 | 0.1511 | 43462528 |
| 0.004 | 116.3137 | 16400 | 0.1549 | 43999968 |
| 0.0013 | 117.7344 | 16600 | 0.1557 | 44533664 |
| 0.0008 | 119.1497 | 16800 | 0.1616 | 45067976 |
| 0.0021 | 120.5704 | 17000 | 0.1608 | 45610752 |
| 0.0015 | 121.9911 | 17200 | 0.1639 | 46147416 |
| 0.0012 | 123.4064 | 17400 | 0.1689 | 46682792 |
| 0.0013 | 124.8271 | 17600 | 0.1701 | 47218688 |
| 0.0119 | 126.2424 | 17800 | 0.1766 | 47751176 |
| 0.0007 | 127.6631 | 18000 | 0.1814 | 48286872 |
| 0.0031 | 129.0784 | 18200 | 0.1835 | 48824840 |
| 0.0041 | 130.4991 | 18400 | 0.1855 | 49361064 |
| 0.0042 | 131.9198 | 18600 | 0.1927 | 49893616 |
| 0.0004 | 133.3351 | 18800 | 0.1908 | 50425120 |
| 0.0004 | 134.7558 | 19000 | 0.1944 | 50963088 |
| 0.0006 | 136.1711 | 19200 | 0.2051 | 51496048 |
| 0.0003 | 137.5918 | 19400 | 0.2001 | 52038608 |
| 0.0007 | 139.0071 | 19600 | 0.2065 | 52575544 |
| 0.0003 | 140.4278 | 19800 | 0.2146 | 53114912 |
| 0.0003 | 141.8485 | 20000 | 0.2164 | 53657368 |
| 0.0022 | 143.2638 | 20200 | 0.2204 | 54195776 |
| 0.0002 | 144.6845 | 20400 | 0.2224 | 54722232 |
| 0.0006 | 146.0998 | 20600 | 0.2283 | 55255168 |
| 0.0006 | 147.5205 | 20800 | 0.2333 | 55786616 |
| 0.0004 | 148.9412 | 21000 | 0.2350 | 56322200 |
| 0.0003 | 150.3565 | 21200 | 0.2438 | 56860136 |
| 0.0002 | 151.7772 | 21400 | 0.2434 | 57396560 |
| 0.0001 | 153.1925 | 21600 | 0.2479 | 57930904 |
| 0.0001 | 154.6132 | 21800 | 0.2529 | 58469832 |
| 0.0001 | 156.0285 | 22000 | 0.2553 | 59001744 |
| 0.0001 | 157.4492 | 22200 | 0.2570 | 59542632 |
| 0.001 | 158.8699 | 22400 | 0.2659 | 60077280 |
| 0.0003 | 160.2852 | 22600 | 0.2696 | 60614824 |
| 0.0002 | 161.7059 | 22800 | 0.2692 | 61145384 |
| 0.0002 | 163.1212 | 23000 | 0.2708 | 61678824 |
| 0.0001 | 164.5419 | 23200 | 0.2757 | 62213064 |
| 0.0001 | 165.9626 | 23400 | 0.2784 | 62746840 |
| 0.0 | 167.3779 | 23600 | 0.2879 | 63279640 |
| 0.0001 | 168.7986 | 23800 | 0.2873 | 63817648 |
| 0.0 | 170.2139 | 24000 | 0.2914 | 64355456 |
| 0.0 | 171.6346 | 24200 | 0.2951 | 64891336 |
| 0.0 | 173.0499 | 24400 | 0.2955 | 65431304 |
| 0.0 | 174.4706 | 24600 | 0.2949 | 65971176 |
| 0.0001 | 175.8913 | 24800 | 0.3027 | 66508200 |
| 0.0001 | 177.3066 | 25000 | 0.3048 | 67044512 |
| 0.0 | 178.7273 | 25200 | 0.3058 | 67581248 |
| 0.0 | 180.1426 | 25400 | 0.3092 | 68116280 |
| 0.0 | 181.5633 | 25600 | 0.3119 | 68654016 |
| 0.0 | 182.9840 | 25800 | 0.3177 | 69191168 |
| 0.0 | 184.3993 | 26000 | 0.3174 | 69725736 |
| 0.0001 | 185.8200 | 26200 | 0.3201 | 70266432 |
| 0.0 | 187.2353 | 26400 | 0.3265 | 70795080 |
| 0.0 | 188.6560 | 26600 | 0.3255 | 71337664 |
| 0.0 | 190.0713 | 26800 | 0.3332 | 71873944 |
| 0.0001 | 191.4920 | 27000 | 0.3330 | 72406760 |
| 0.0 | 192.9127 | 27200 | 0.3379 | 72941856 |
| 0.0 | 194.3280 | 27400 | 0.3333 | 73486320 |
| 0.0 | 195.7487 | 27600 | 0.3379 | 74024784 |
| 0.0 | 197.1640 | 27800 | 0.3372 | 74562272 |
| 0.0 | 198.5847 | 28000 | 0.3413 | 75101016 |
| 0.0 | 200.0 | 28200 | 0.3469 | 75632576 |
| 0.0 | 201.4207 | 28400 | 0.3473 | 76166696 |
| 0.0 | 202.8414 | 28600 | 0.3513 | 76703192 |
| 0.0 | 204.2567 | 28800 | 0.3588 | 77237304 |
| 0.0 | 205.6774 | 29000 | 0.3607 | 77775808 |
| 0.0 | 207.0927 | 29200 | 0.3624 | 78304552 |
| 0.0 | 208.5134 | 29400 | 0.3567 | 78842312 |
| 0.0 | 209.9340 | 29600 | 0.3637 | 79379384 |
| 0.0 | 211.3494 | 29800 | 0.3648 | 79916200 |
| 0.0 | 212.7701 | 30000 | 0.3697 | 80450848 |
| 0.0 | 214.1854 | 30200 | 0.3757 | 80978696 |
| 0.0 | 215.6061 | 30400 | 0.3725 | 81517864 |
| 0.0 | 217.0214 | 30600 | 0.3748 | 82057360 |
| 0.0 | 218.4421 | 30800 | 0.3792 | 82601680 |
| 0.0 | 219.8627 | 31000 | 0.3769 | 83137640 |
| 0.0 | 221.2781 | 31200 | 0.3801 | 83674536 |
| 0.0 | 222.6988 | 31400 | 0.3842 | 84215064 |
| 0.0 | 224.1141 | 31600 | 0.3857 | 84750440 |
| 0.0 | 225.5348 | 31800 | 0.3825 | 85284976 |
| 0.0 | 226.9554 | 32000 | 0.3818 | 85820408 |
| 0.0 | 228.3708 | 32200 | 0.3894 | 86358288 |
| 0.0 | 229.7914 | 32400 | 0.3895 | 86896432 |
| 0.0 | 231.2068 | 32600 | 0.3825 | 87433496 |
| 0.0 | 232.6275 | 32800 | 0.3906 | 87969480 |
| 0.0 | 234.0428 | 33000 | 0.3918 | 88503984 |
| 0.0 | 235.4635 | 33200 | 0.3934 | 89043584 |
| 0.0 | 236.8841 | 33400 | 0.4044 | 89572896 |
| 0.0 | 238.2995 | 33600 | 0.3927 | 90114360 |
| 0.0 | 239.7201 | 33800 | 0.4034 | 90650032 |
| 0.0 | 241.1355 | 34000 | 0.4063 | 91178208 |
| 0.0 | 242.5561 | 34200 | 0.4017 | 91712168 |
| 0.0 | 243.9768 | 34400 | 0.4046 | 92253832 |
| 0.0 | 245.3922 | 34600 | 0.4086 | 92783304 |
| 0.0 | 246.8128 | 34800 | 0.4016 | 93323088 |
| 0.0 | 248.2282 | 35000 | 0.4019 | 93858448 |
| 0.0 | 249.6488 | 35200 | 0.4071 | 94391144 |
| 0.0 | 251.0642 | 35400 | 0.3990 | 94929608 |
| 0.0 | 252.4848 | 35600 | 0.4011 | 95474424 |
| 0.0 | 253.9055 | 35800 | 0.4070 | 96007792 |
| 0.0 | 255.3209 | 36000 | 0.3991 | 96546584 |
| 0.0 | 256.7415 | 36200 | 0.4101 | 97077888 |
| 0.0 | 258.1569 | 36400 | 0.3991 | 97612368 |
| 0.0 | 259.5775 | 36600 | 0.4082 | 98151616 |
| 0.0 | 260.9982 | 36800 | 0.4057 | 98684232 |
| 0.0 | 262.4135 | 37000 | 0.4145 | 99220560 |
| 0.0 | 263.8342 | 37200 | 0.4050 | 99758136 |
| 0.0 | 265.2496 | 37400 | 0.4118 | 100296152 |
| 0.0 | 266.6702 | 37600 | 0.4149 | 100836120 |
| 0.0 | 268.0856 | 37800 | 0.4066 | 101372264 |
| 0.0 | 269.5062 | 38000 | 0.4120 | 101912112 |
| 0.0 | 270.9269 | 38200 | 0.4087 | 102446016 |
| 0.0 | 272.3422 | 38400 | 0.4136 | 102980360 |
| 0.0 | 273.7629 | 38600 | 0.4182 | 103519296 |
| 0.0 | 275.1783 | 38800 | 0.4100 | 104053200 |
| 0.0 | 276.5989 | 39000 | 0.4106 | 104594720 |
| 0.0 | 278.0143 | 39200 | 0.4107 | 105126640 |
| 0.0 | 279.4349 | 39400 | 0.4083 | 105660640 |
| 0.0 | 280.8556 | 39600 | 0.4118 | 106198248 |
| 0.0 | 282.2709 | 39800 | 0.4026 | 106737720 |
| 0.0 | 283.6916 | 40000 | 0.4115 | 107274480 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
hZzy/mistral-7b-expo-7b-L2EXPO-25-smallr-1 | hZzy | "2025-04-19T19:46:54Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/direction_right2",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:adapter:hZzy/mistral-7b-sft-25-1",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T12:55:26Z" | ---
base_model: hZzy/mistral-7b-sft-25-1
datasets:
- hZzy/direction_right2
library_name: peft
license: apache-2.0
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
model-index:
- name: mistral-7b-expo-7b-L2EXPO-25-smallr-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/5ppb21pi)
# mistral-7b-expo-7b-L2EXPO-25-smallr-1
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1) on the hZzy/direction_right2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4531
- Objective: 0.4545
- Reward Accuracy: 0.6563
- Logp Accuracy: 0.6493
- Log Diff Policy: 15.4503
- Chosen Logps: -148.5083
- Rejected Logps: -163.9586
- Chosen Rewards: -0.5383
- Rejected Rewards: -0.6889
- Logits: -2.1895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 108
- total_eval_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Reward Accuracy | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Chosen Rewards | Rejected Rewards | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:---------------:|:-------------:|:---------------:|:------------:|:--------------:|:--------------:|:----------------:|:-------:|
| 0.5866 | 0.0758 | 50 | 0.5114 | 0.5084 | 0.5481 | 0.5168 | 0.4644 | -93.1551 | -93.6196 | 0.0153 | 0.0144 | -2.2005 |
| 0.6029 | 0.1517 | 100 | 0.5040 | 0.5011 | 0.5741 | 0.5316 | 1.3657 | -93.8686 | -95.2344 | 0.0081 | -0.0017 | -2.1831 |
| 0.6165 | 0.2275 | 150 | 0.4877 | 0.4856 | 0.5970 | 0.5741 | 5.4287 | -98.6006 | -104.0293 | -0.0392 | -0.0896 | -2.0756 |
| 0.5324 | 0.3033 | 200 | 0.4748 | 0.4791 | 0.6172 | 0.6110 | 9.8505 | -116.5340 | -126.3844 | -0.2185 | -0.3132 | -2.1418 |
| 0.5089 | 0.3792 | 250 | 0.4679 | 0.4712 | 0.6306 | 0.6222 | 11.0787 | -118.1832 | -129.2619 | -0.2350 | -0.3420 | -2.2452 |
| 0.5254 | 0.4550 | 300 | 0.4669 | 0.4693 | 0.6479 | 0.6387 | 14.2479 | -134.9546 | -149.2025 | -0.4027 | -0.5414 | -2.1789 |
| 0.4904 | 0.5308 | 350 | 0.4571 | 0.4582 | 0.6477 | 0.6423 | 12.9700 | -138.0092 | -150.9792 | -0.4333 | -0.5591 | -2.2293 |
| 0.4722 | 0.6067 | 400 | 0.4556 | 0.4563 | 0.6521 | 0.6479 | 13.8030 | -127.5593 | -141.3622 | -0.3288 | -0.4630 | -2.2377 |
| 0.4716 | 0.6825 | 450 | 0.4574 | 0.4604 | 0.6518 | 0.6443 | 15.1329 | -157.4561 | -172.5890 | -0.6277 | -0.7752 | -2.1945 |
| 0.5051 | 0.7583 | 500 | 0.4571 | 0.4591 | 0.6535 | 0.6513 | 15.8245 | -148.2936 | -164.1181 | -0.5361 | -0.6905 | -2.2074 |
| 0.4423 | 0.8342 | 550 | 0.4539 | 0.4550 | 0.6527 | 0.6513 | 15.3717 | -145.5679 | -160.9395 | -0.5089 | -0.6588 | -2.2040 |
| 0.465 | 0.9100 | 600 | 0.4529 | 0.4543 | 0.6549 | 0.6485 | 15.3658 | -148.1466 | -163.5124 | -0.5346 | -0.6845 | -2.1926 |
| 0.5092 | 0.9858 | 650 | 0.4531 | 0.4545 | 0.6541 | 0.6490 | 15.4559 | -148.5047 | -163.9607 | -0.5382 | -0.6890 | -2.1898 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1 |
abhay2812/gemma-3-1b-it-bnb-4bit-grpo | abhay2812 | "2025-04-19T19:43:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T19:29:15Z" | ---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** abhay2812
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dzanbek/6dbc6455-0576-4c97-86c2-16669e886773 | dzanbek | "2025-04-19T19:37:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T19:30:14Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6dbc6455-0576-4c97-86c2-16669e886773
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cc5c269dbd02a462_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cc5c269dbd02a462_train_data.json
type:
field_input: metadata
field_instruction: prompt
field_output: cluster_description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/6dbc6455-0576-4c97-86c2-16669e886773
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/cc5c269dbd02a462_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1f3e75a6-38eb-4ec5-b605-d3730aad6fbb
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 1f3e75a6-38eb-4ec5-b605-d3730aad6fbb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6dbc6455-0576-4c97-86c2-16669e886773
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3477 | 0.1354 | 150 | 2.3144 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TOMFORD79/Candy_12 | TOMFORD79 | "2025-04-19T19:35:25Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T18:36:17Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Candy_10 | TOMFORD79 | "2025-04-19T19:34:58Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T18:36:06Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
icyBear02/qwen-finance-lora | icyBear02 | "2025-04-19T19:34:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T19:34:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kokovova/94dcee8b-c7c1-4e6f-b697-891184dec89e | kokovova | "2025-04-19T19:32:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:29:30Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 94dcee8b-c7c1-4e6f-b697-891184dec89e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f6a3173bd490817c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f6a3173bd490817c_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/94dcee8b-c7c1-4e6f-b697-891184dec89e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f6a3173bd490817c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3455ae01-41f8-4501-91f4-42e88822a586
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 3455ae01-41f8-4501-91f4-42e88822a586
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 94dcee8b-c7c1-4e6f-b697-891184dec89e
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4299
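As a sketch (output path is illustrative), the LoRA adapter can be merged into its base model for adapter-free serving:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "kokovova/94dcee8b-c7c1-4e6f-b697-891184dec89e")
merged = model.merge_and_unload()  # fold the LoRA weights back into the base model
merged.save_pretrained("qwen2-0.5b-merged")  # output path is illustrative
```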
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4326 | 0.2992 | 200 | 1.4299 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/1a050358-c285-427c-941f-00e0cd3faadc | shibajustfor | "2025-04-19T19:29:13Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:cognitivecomputations/Samantha-1.11-70b",
"base_model:adapter:cognitivecomputations/Samantha-1.11-70b",
"region:us"
] | null | "2025-04-19T19:27:33Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: cognitivecomputations/Samantha-1.11-70b
model-index:
- name: shibajustfor/1a050358-c285-427c-941f-00e0cd3faadc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/1a050358-c285-427c-941f-00e0cd3faadc
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
dzanbek/95b025bb-5456-4569-9dbf-223c1bf753b9 | dzanbek | "2025-04-19T19:29:12Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b",
"base_model:adapter:unsloth/llama-3-8b",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T18:57:36Z" | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 95b025bb-5456-4569-9dbf-223c1bf753b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ffaf56793ebfca53_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ffaf56793ebfca53_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/95b025bb-5456-4569-9dbf-223c1bf753b9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ffaf56793ebfca53_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 03c871f1-6934-4c7f-b6a5-3802c83267eb
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 03c871f1-6934-4c7f-b6a5-3802c83267eb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 95b025bb-5456-4569-9dbf-223c1bf753b9
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0132 | 150 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Hartunka/tiny_bert_km_50_v2 | Hartunka | "2025-04-19T19:28:32Z" | 3 | 0 | null | [
"safetensors",
"distilbert",
"generated_from_trainer",
"dataset:Hartunka/processed_wikitext-103-raw-v1-km-50_v2",
"model-index",
"region:us"
] | null | "2025-04-14T12:07:16Z" | ---
tags:
- generated_from_trainer
datasets:
- Hartunka/processed_wikitext-103-raw-v1-km-50_v2
metrics:
- accuracy
model-index:
- name: tiny_bert_km_50_v2
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: Hartunka/processed_wikitext-103-raw-v1-km-50_v2
type: Hartunka/processed_wikitext-103-raw-v1-km-50_v2
metrics:
- name: Accuracy
type: accuracy
value: 0.15262473865626944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_bert_km_50_v2
This model was trained on the Hartunka/processed_wikitext-103-raw-v1-km-50_v2 dataset (the auto-generated card does not specify a base model).
It achieves the following results on the evaluation set:
- Loss: 6.8757
- Accuracy: 0.1526
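A minimal masked-language-modeling sketch (assumes the standard `[MASK]` token of the DistilBERT tokenizer; given the ~0.15 eval accuracy, expect rough predictions):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Hartunka/tiny_bert_km_50_v2")
print(fill("The capital of France is [MASK]."))
```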
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.9045 | 4.1982 | 10000 | 6.9532 | 0.1481 |
| 6.4983 | 8.3963 | 20000 | 6.8621 | 0.1524 |
| 6.3069 | 12.5945 | 30000 | 6.8769 | 0.1533 |
| 6.1769 | 16.7926 | 40000 | 6.9537 | 0.1523 |
| 6.0989 | 20.9908 | 50000 | 7.0162 | 0.1513 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.19.1
|
phospho-app/suffed_animal_v1-19nh42sk0d | phospho-app | "2025-04-19T19:27:16Z" | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"replicate",
"region:us"
] | null | "2025-04-19T18:54:38Z" |
---
tags:
- phosphobot
- gr00t
- replicate
task_categories:
- robotics
---
# Gr00t Model - phospho Replication Pipeline
This model was trained using **phospho's Replicate pipeline** for **gr00t models**.
Training parameters:
- **Dataset**: [Starkosaure/suffed_animal_v1](https://huggingface.co/datasets/Starkosaure/suffed_animal_v1)
- **Wandb run URL**: None
- **Epochs**: 20
- **Batch size**: 64
- **Training steps**: 1646
๐ **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
๐ค **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
๐ **Explore on Replicate**: [Replicate](https://replicate.com/phospho-app/gr00t-policy)
|
RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf | RichardErkhov | "2025-04-19T19:24:06Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T17:55:58Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mp_mistral7bv3_sft_dpo_beta1e-1_epoch3 - GGUF
- Model creator: https://huggingface.co/yjwon/
- Original model: https://huggingface.co/yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q2_K.gguf) | Q2_K | 2.54GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K.gguf) | Q3_K | 3.28GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K.gguf) | Q4_K | 4.07GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K.gguf) | Q5_K | 4.78GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q6_K.gguf) | Q6_K | 5.54GB |
| [mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q8_0.gguf) | Q8_0 | 7.17GB |
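As a sketch, any of the files above can be downloaded and run locally with the `llama-cpp-python` bindings (file choice and generation settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one quantized file from this repo (Q4_K_M chosen as an example)
path = hf_hub_download(
    repo_id="RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf",
    filename="mp_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Explain quantization in one sentence.", max_tokens=64)["choices"][0]["text"])
```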
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Tigran010101/distilgpt2-finetuned-wikitext2 | Tigran010101 | "2025-04-19T19:22:26Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T19:05:46Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1003
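A minimal text-generation sketch (prompt and length are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Tigran010101/distilgpt2-finetuned-wikitext2")
print(generator("The history of", max_new_tokens=40)[0]["generated_text"])
```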
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
ashimdahal/Ertugrul-Qwen2-VL-7B-Captioner-Relaxed_Qwen-Qwen2-VL-7B-Instruct | ashimdahal | "2025-04-19T19:22:22Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated-by-script",
"image-captioning",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:01:54Z" |
---
# Auto-generated fields, verify and update as needed
license: apache-2.0
tags:
- generated-by-script
- peft # Assume PEFT adapter unless explicitly a full model repo
- image-captioning # Add more specific task tags if applicable
base_model: [] # <-- FIXED: Provide empty list as default to satisfy validator
# - Ertugrul/Qwen2-VL-7B-Captioner-Relaxed # Heuristic guess for processor, VERIFY MANUALLY
# - Qwen/Qwen2-VL-7B-Instruct # Heuristic guess for decoder, VERIFY MANUALLY
---
# Model: ashimdahal/Ertugrul-Qwen2-VL-7B-Captioner-Relaxed_Qwen-Qwen2-VL-7B-Instruct
This repository contains model artifacts for a run named `Ertugrul-Qwen2-VL-7B-Captioner-Relaxed_Qwen-Qwen2-VL-7B-Instruct`, likely a PEFT adapter.
## Training Source
This model was trained as part of the project/codebase available at:
https://github.com/ashimdahal/captioning_image/blob/main
## Base Model Information (Heuristic)
* **Processor/Vision Encoder (Guessed):** `Ertugrul/Qwen2-VL-7B-Captioner-Relaxed`
* **Decoder/Language Model (Guessed):** `Qwen/Qwen2-VL-7B-Instruct`
**⚠️ Important:** The `base_model` tag in the metadata above is initially empty. The models listed here are *heuristic guesses* based on the training directory name (`Ertugrul-Qwen2-VL-7B-Captioner-Relaxed_Qwen-Qwen2-VL-7B-Instruct`). Please verify these against your training configuration and update the `base_model:` list in the YAML metadata block at the top of this README with the correct Hugging Face model identifiers.
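One hedged way to fix the metadata programmatically is `huggingface_hub`'s `metadata_update` helper; the IDs below are just the heuristic guesses from above, so verify them before running:
```python
from huggingface_hub import metadata_update

# base_model values are the heuristic guesses above -- verify before running
metadata_update(
    "ashimdahal/Ertugrul-Qwen2-VL-7B-Captioner-Relaxed_Qwen-Qwen2-VL-7B-Instruct",
    {"base_model": [
        "Ertugrul/Qwen2-VL-7B-Captioner-Relaxed",
        "Qwen/Qwen2-VL-7B-Instruct",
    ]},
    overwrite=True,  # required because base_model already exists (as an empty list)
)
```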
## How to Use (Example with PEFT)
```python
from transformers import AutoProcessor, AutoModelForVision2Seq, Blip2ForConditionalGeneration # Or other relevant classes
from peft import PeftModel, PeftConfig
import torch
# --- Configuration ---
# 1. Specify the EXACT base model identifiers used during training
base_processor_id = "Ertugrul/Qwen2-VL-7B-Captioner-Relaxed" # <-- Replace with correct HF ID
base_model_id = "Qwen/Qwen2-VL-7B-Instruct" # <-- Replace with correct HF ID (e.g., Salesforce/blip2-opt-2.7b)
# 2. Specify the PEFT adapter repository ID (this repo)
adapter_repo_id = "ashimdahal/Ertugrul-Qwen2-VL-7B-Captioner-Relaxed_Qwen-Qwen2-VL-7B-Instruct"
# --- Load Base Model and Processor ---
processor = AutoProcessor.from_pretrained(base_processor_id)
# Load the base model (ensure it matches the type used for training)
# Example for BLIP-2 OPT:
base_model = Blip2ForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16 # Or torch.bfloat16 or float32, match training/inference needs
)
# Or, for other model types, load with the matching class instead
# (uncomment the variant that corresponds to your training setup):
# base_model = AutoModelForVision2Seq.from_pretrained(base_model_id, torch_dtype=torch.float16)
# base_model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)  # needs AutoModelForCausalLM imported
# --- Load PEFT Adapter ---
# Load the adapter config and merge the adapter weights into the base model
model = PeftModel.from_pretrained(base_model, adapter_repo_id)
model = model.merge_and_unload() # Merge weights for inference (optional but often recommended)
model.eval() # Set model to evaluation mode
# --- Inference Example ---
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
image = ... # Load your image (e.g., using PIL)
text = "a photo of" # Optional prompt start
inputs = processor(images=image, text=text, return_tensors="pt").to(device, torch.float16) # Match model dtype
generated_ids = model.generate(**inputs, max_new_tokens=50)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(f"Generated Caption: {{generated_text}}")
```
*More model-specific documentation, evaluation results, and usage examples should be added here.*
|
ashimdahal/microsoft-git-base_microsoft-git-base | ashimdahal | "2025-04-19T19:22:15Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated-by-script",
"image-captioning",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:01:29Z" |
---
# Auto-generated fields, verify and update as needed
license: apache-2.0
tags:
- generated-by-script
- peft # Assume PEFT adapter unless explicitly a full model repo
- image-captioning # Add more specific task tags if applicable
base_model: [] # <-- FIXED: Provide empty list as default to satisfy validator
# - microsoft/git-base # Heuristic guess for processor, VERIFY MANUALLY
# - microsoft/git-base # Heuristic guess for decoder, VERIFY MANUALLY
---
# Model: ashimdahal/microsoft-git-base_microsoft-git-base
This repository contains model artifacts for a run named `microsoft-git-base_microsoft-git-base`, likely a PEFT adapter.
## Training Source
This model was trained as part of the project/codebase available at:
https://github.com/ashimdahal/captioning_image/blob/main
## Base Model Information (Heuristic)
* **Processor/Vision Encoder (Guessed):** `microsoft/git-base`
* **Decoder/Language Model (Guessed):** `microsoft/git-base`
**⚠️ Important:** The `base_model` tag in the metadata above is initially empty. The models listed here are *heuristic guesses* based on the training directory name (`microsoft-git-base_microsoft-git-base`). Please verify these against your training configuration and update the `base_model:` list in the YAML metadata block at the top of this README with the correct Hugging Face model identifiers.
## How to Use (Example with PEFT)
```python
from transformers import AutoProcessor, AutoModelForVision2Seq, Blip2ForConditionalGeneration # Or other relevant classes
from peft import PeftModel, PeftConfig
import torch
# --- Configuration ---
# 1. Specify the EXACT base model identifiers used during training
base_processor_id = "microsoft/git-base" # <-- Replace with correct HF ID
base_model_id = "microsoft/git-base" # <-- Replace with correct HF ID (e.g., Salesforce/blip2-opt-2.7b)
# 2. Specify the PEFT adapter repository ID (this repo)
adapter_repo_id = "ashimdahal/microsoft-git-base_microsoft-git-base"
# --- Load Base Model and Processor ---
processor = AutoProcessor.from_pretrained(base_processor_id)
# Load the base model (ensure it matches the type used for training)
# Example for BLIP-2 OPT:
base_model = Blip2ForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16 # Or torch.bfloat16 or float32, match training/inference needs
)
# Or, for other model types, load with the matching class instead
# (uncomment the variant that corresponds to your training setup):
# base_model = AutoModelForVision2Seq.from_pretrained(base_model_id, torch_dtype=torch.float16)
# base_model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)  # needs AutoModelForCausalLM imported
# --- Load PEFT Adapter ---
# Load the adapter config and merge the adapter weights into the base model
model = PeftModel.from_pretrained(base_model, adapter_repo_id)
model = model.merge_and_unload() # Merge weights for inference (optional but often recommended)
model.eval() # Set model to evaluation mode
# --- Inference Example ---
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
image = ... # Load your image (e.g., using PIL)
text = "a photo of" # Optional prompt start
inputs = processor(images=image, text=text, return_tensors="pt").to(device, torch.float16) # Match model dtype
generated_ids = model.generate(**inputs, max_new_tokens=50)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(f"Generated Caption: {{generated_text}}")
```
*More model-specific documentation, evaluation results, and usage examples should be added here.*
|
ashimdahal/Salesforce-blip-image-captioning-base_Salesforce-blip-image-captioning-base | ashimdahal | "2025-04-19T19:22:11Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated-by-script",
"image-captioning",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:01:20Z" |
---
# Auto-generated fields, verify and update as needed
license: apache-2.0
tags:
- generated-by-script
- peft # Assume PEFT adapter unless explicitly a full model repo
- image-captioning # Add more specific task tags if applicable
base_model: [] # <-- FIXED: Provide empty list as default to satisfy validator
# - Salesforce/blip-image-captioning-base # Heuristic guess for processor, VERIFY MANUALLY
# - Salesforce/blip-image-captioning-base # Heuristic guess for decoder, VERIFY MANUALLY
---
# Model: ashimdahal/Salesforce-blip-image-captioning-base_Salesforce-blip-image-captioning-base
This repository contains model artifacts for a run named `Salesforce-blip-image-captioning-base_Salesforce-blip-image-captioning-base`, likely a PEFT adapter.
## Training Source
This model was trained as part of the project/codebase available at:
https://github.com/ashimdahal/captioning_image/blob/main
## Base Model Information (Heuristic)
* **Processor/Vision Encoder (Guessed):** `Salesforce/blip-image-captioning-base`
* **Decoder/Language Model (Guessed):** `Salesforce/blip-image-captioning-base`
**⚠️ Important:** The `base_model` tag in the metadata above is initially empty. The models listed here are *heuristic guesses* based on the training directory name (`Salesforce-blip-image-captioning-base_Salesforce-blip-image-captioning-base`). Please verify these against your training configuration and update the `base_model:` list in the YAML metadata block at the top of this README with the correct Hugging Face model identifiers.
## How to Use (Example with PEFT)
```python
from transformers import AutoProcessor, AutoModelForVision2Seq, Blip2ForConditionalGeneration # Or other relevant classes
from peft import PeftModel, PeftConfig
import torch
# --- Configuration ---
# 1. Specify the EXACT base model identifiers used during training
base_processor_id = "Salesforce/blip-image-captioning-base" # <-- Replace with correct HF ID
base_model_id = "Salesforce/blip-image-captioning-base" # <-- Replace with correct HF ID (e.g., Salesforce/blip2-opt-2.7b)
# 2. Specify the PEFT adapter repository ID (this repo)
adapter_repo_id = "ashimdahal/Salesforce-blip-image-captioning-base_Salesforce-blip-image-captioning-base"
# --- Load Base Model and Processor ---
processor = AutoProcessor.from_pretrained(base_processor_id)
# Load the base model (ensure it matches the type used for training)
# Example for BLIP-2 OPT:
base_model = Blip2ForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16 # Or torch.bfloat16 or float32, match training/inference needs
)
# Or, for other model types, load with the matching class instead
# (uncomment the variant that corresponds to your training setup):
# base_model = AutoModelForVision2Seq.from_pretrained(base_model_id, torch_dtype=torch.float16)
# base_model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)  # needs AutoModelForCausalLM imported
# --- Load PEFT Adapter ---
# Load the adapter config and merge the adapter weights into the base model
model = PeftModel.from_pretrained(base_model, adapter_repo_id)
model = model.merge_and_unload() # Merge weights for inference (optional but often recommended)
model.eval() # Set model to evaluation mode
# --- Inference Example ---
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
image = ... # Load your image (e.g., using PIL)
text = "a photo of" # Optional prompt start
inputs = processor(images=image, text=text, return_tensors="pt").to(device, torch.float16) # Match model dtype
generated_ids = model.generate(**inputs, max_new_tokens=50)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(f"Generated Caption: {{generated_text}}")
```
*More model-specific documentation, evaluation results, and usage examples should be added here.*
|
ashimdahal/meta-llama-Llama-3.2-11B-Vision-Instruct_meta-llama-Llama-3.2-11B-Vision-Instruct | ashimdahal | "2025-04-19T19:22:02Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated-by-script",
"image-captioning",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:00:59Z" |
---
# Auto-generated fields, verify and update as needed
license: apache-2.0
tags:
- generated-by-script
- peft # Assume PEFT adapter unless explicitly a full model repo
- image-captioning # Add more specific task tags if applicable
base_model: [] # <-- FIXED: Provide empty list as default to satisfy validator
# - meta-llama/Llama-3.2-11B-Vision-Instruct # Heuristic guess for processor, VERIFY MANUALLY
# - meta-llama/Llama-3.2-11B-Vision-Instruct # Heuristic guess for decoder, VERIFY MANUALLY
---
# Model: ashimdahal/meta-llama-Llama-3.2-11B-Vision-Instruct_meta-llama-Llama-3.2-11B-Vision-Instruct
This repository contains model artifacts for a run named `meta-llama-Llama-3.2-11B-Vision-Instruct_meta-llama-Llama-3.2-11B-Vision-Instruct`, likely a PEFT adapter.
## Training Source
This model was trained as part of the project/codebase available at:
https://github.com/ashimdahal/captioning_image/blob/main
## Base Model Information (Heuristic)
* **Processor/Vision Encoder (Guessed):** `meta-llama/Llama-3.2-11B-Vision-Instruct`
* **Decoder/Language Model (Guessed):** `meta-llama/Llama-3.2-11B-Vision-Instruct`
**⚠️ Important:** The `base_model` tag in the metadata above is initially empty. The models listed here are *heuristic guesses* based on the training directory name (`meta-llama-Llama-3.2-11B-Vision-Instruct_meta-llama-Llama-3.2-11B-Vision-Instruct`). Please verify these against your training configuration and update the `base_model:` list in the YAML metadata block at the top of this README with the correct Hugging Face model identifiers.
## How to Use (Example with PEFT)
```python
from transformers import AutoProcessor, AutoModelForVision2Seq, Blip2ForConditionalGeneration # Or other relevant classes
from peft import PeftModel, PeftConfig
import torch
# --- Configuration ---
# 1. Specify the EXACT base model identifiers used during training
base_processor_id = "meta/llama-Llama-3.2-11B-Vision-Instruct" # <-- Replace with correct HF ID
base_model_id = "meta/llama-Llama-3.2-11B-Vision-Instruct" # <-- Replace with correct HF ID (e.g., Salesforce/blip2-opt-2.7b)
# 2. Specify the PEFT adapter repository ID (this repo)
adapter_repo_id = "ashimdahal/meta-llama-Llama-3.2-11B-Vision-Instruct_meta-llama-Llama-3.2-11B-Vision-Instruct"
# --- Load Base Model and Processor ---
processor = AutoProcessor.from_pretrained(base_processor_id)
# Load the base model (ensure it matches the type used for training)
# Example for BLIP-2 OPT:
base_model = Blip2ForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16 # Or torch.bfloat16 or float32, match training/inference needs
)
# Or, for other model types, load with the matching class instead
# (uncomment the variant that corresponds to your training setup):
# base_model = AutoModelForVision2Seq.from_pretrained(base_model_id, torch_dtype=torch.float16)
# base_model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)  # needs AutoModelForCausalLM imported
# --- Load PEFT Adapter ---
# Load the adapter config and merge the adapter weights into the base model
model = PeftModel.from_pretrained(base_model, adapter_repo_id)
model = model.merge_and_unload() # Merge weights for inference (optional but often recommended)
model.eval() # Set model to evaluation mode
# --- Inference Example ---
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
image = ... # Load your image (e.g., using PIL)
text = "a photo of" # Optional prompt start
inputs = processor(images=image, text=text, return_tensors="pt").to(device, torch.float16) # Match model dtype
generated_ids = model.generate(**inputs, max_new_tokens=50)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(f"Generated Caption: {{generated_text}}")
```
*More model-specific documentation, evaluation results, and usage examples should be added here.*
|
ZMC2019/Qwen2.5-Math-7B-Instruct | ZMC2019 | "2025-04-19T19:19:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2409.12122",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T19:17:06Z" | ---
base_model: Qwen/Qwen2.5-Math-7B
language:
- en
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/blob/main/LICENSE
---
# Qwen2.5-Math-7B-Instruct
> [!Warning]
> <div align="center">
> <b>
> 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.
> </b>
> </div>
## Introduction
In August 2024, we released [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/), the first series of mathematical LLMs in our Qwen family. A month later, we upgraded it and open-sourced the **Qwen2.5-Math** series, including the base models **Qwen2.5-Math-1.5B/7B/72B**, the instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and the mathematical reward model **Qwen2.5-Math-RM-72B**.
Unlike the Qwen2-Math series, which only supports Chain-of-Thought (CoT) reasoning for English math problems, the Qwen2.5-Math series supports both CoT and Tool-Integrated Reasoning (TIR) for math problems in both Chinese and English. With CoT, the Qwen2.5-Math models achieve significant performance improvements over the Qwen2-Math models on Chinese and English mathematics benchmarks.

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and in handling complex mathematical or algorithmic tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic reasoning. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark when using TIR.
## Model Details
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math).
## Requirements
* `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended.
> [!Warning]
> <div align="center">
> <b>
> 🚨 This is a must because <code>transformers</code> has integrated Qwen2 code since <code>4.37.0</code>.
> </b>
> </div>
For GPU memory requirements and the corresponding throughput, see the Qwen2 results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Quick Start
> [!Important]
>
> **Qwen2.5-Math-7B-Instruct** is an instruction model for chatting;
>
> **Qwen2.5-Math-7B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.
>
### 🤗 Hugging Face Transformers
Qwen2.5-Math can be deployed and used for inference in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). The following snippet shows how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Math-7B-Instruct"
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
# CoT
messages = [
{"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
{"role": "user", "content": prompt}
]
# TIR (note: this assignment overwrites the CoT messages above; keep only the variant you want)
messages = [
{"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you find our work helpful, feel free to give us a citation.
```
@article{yang2024qwen25mathtechnicalreportmathematical,
title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
journal={arXiv preprint arXiv:2409.12122},
year={2024}
}
``` |
MrRobotoAI/B12 | MrRobotoAI | "2025-04-19T19:17:37Z" | 50 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MrRobotoAI/102",
"base_model:merge:MrRobotoAI/102",
"base_model:MrRobotoAI/105",
"base_model:merge:MrRobotoAI/105",
"base_model:MrRobotoAI/108",
"base_model:merge:MrRobotoAI/108",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T20:56:55Z" | ---
base_model:
- MrRobotoAI/108
- MrRobotoAI/105
- MrRobotoAI/102
library_name: transformers
tags:
- mergekit
- merge
---
# merge 13,822
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/108](https://huggingface.co/MrRobotoAI/108)
* [MrRobotoAI/105](https://huggingface.co/MrRobotoAI/105)
* [MrRobotoAI/102](https://huggingface.co/MrRobotoAI/102)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: MrRobotoAI/102
layer_range: [0, 3]
- sources:
- model: MrRobotoAI/105
layer_range: [4, 28]
- sources:
- model: MrRobotoAI/108
layer_range: [29, 32]
merge_method: passthrough
dtype: float16
```
|
vmpsergio/388fced2-7194-4f76-84cc-5abb47ae505f | vmpsergio | "2025-04-19T19:14:23Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T18:16:11Z" | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 388fced2-7194-4f76-84cc-5abb47ae505f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0a97b13092c68341_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0a97b13092c68341_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/388fced2-7194-4f76-84cc-5abb47ae505f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0a97b13092c68341_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a34df82e-6929-46c2-aad9-f532243f79f7
wandb_project: 01-31
wandb_run: your_name
wandb_runid: a34df82e-6929-46c2-aad9-f532243f79f7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 388fced2-7194-4f76-84cc-5abb47ae505f
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0225 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TOMFORD79/Candy_8 | TOMFORD79 | "2025-04-19T19:14:21Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T18:35:51Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Vimax97/Florence-2-base-gpt4_captioner_v1 | Vimax97 | "2025-04-19T19:14:14Z" | 205 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"art",
"background",
"image-to-text",
"custom_code",
"en",
"base_model:microsoft/Florence-2-base-ft",
"base_model:finetune:microsoft/Florence-2-base-ft",
"license:mit",
"autotrain_compatible",
"region:us"
] | image-to-text | "2025-03-15T01:41:45Z" | ---
library_name: transformers
tags:
- art
- background
license: mit
language:
- en
base_model:
- microsoft/Florence-2-base-ft
pipeline_tag: image-to-text
---
## Uses
A GPT-4o-style captioner, fine-tuned from `microsoft/Florence-2-base-ft`.
### Direct Use
This model can be used to create GPT-4o-style captions.
### Out-of-Scope Use
- This model might not generate long text descriptions, as the context length is 1024.
- Linear scaling was applied to increase the context length; its effect was not measured.
## How to Get Started with the Model
```python
# Load fine-tuned model and processor
import torch
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
repo_name = "Vimax97/Florence-2-base-gpt4_captioner_v1"
model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained(repo_name, trust_remote_code=True)
# Inference
image = Image.open("<path_to_image>")
prompt = "<ImageCAP>" + 'What is the <GPT4> style description for this image?'
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
do_sample=False,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=prompt, image_size=(image.width, image.height))
print("Generated: ",parsed_answer[prompt])
```
#### Training Hyperparameters
- **Training regime:** fp32 precision, 1000 images were used, 1 epoch of finetuning
#### Summary
|
mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF | mradermacher | "2025-04-19T19:14:03Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_SmarTricks_v1.30_flat",
"base_model:quantized:Nexesenex/Llama_3.x_70b_SmarTricks_v1.30_flat",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-18T23:58:05Z" | ---
base_model: Nexesenex/Llama_3.x_70b_SmarTricks_v1.30_flat
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_SmarTricks_v1.30_flat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
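Concretely, for the multi-part quants below (e.g. `*.part1of2`), the parts are plain byte splits in repos packaged like this one (an assumption worth verifying), so they can be joined back into a single `.gguf`; a minimal Python sketch with example file names:
```python
import shutil

# join raw byte-split GGUF parts back into one file (assumes plain byte splits)
parts = [
    "Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q6_K.gguf.part1of2",
    "Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q6_K.gguf.part2of2",
]
with open("Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy; avoids loading ~58 GB into RAM
```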
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.30_flat-i1-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.30_flat.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TOMFORD79/Candy_7 | TOMFORD79 | "2025-04-19T19:13:42Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T18:35:41Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
arjunsama/viper_ep3 | arjunsama | "2025-04-19T19:12:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T19:11:43Z" | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** arjunsama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dx2102/llama-midi | dx2102 | "2025-04-19T19:09:09Z" | 287 | 4 | null | [
"safetensors",
"llama",
"dataset:amaai-lab/MidiCaps",
"dataset:projectlosangeles/Los-Angeles-MIDI-Dataset",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | "2025-02-11T05:13:51Z" | ---
datasets:
- amaai-lab/MidiCaps
- projectlosangeles/Los-Angeles-MIDI-Dataset
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
### Write music scores with llama
### Try the model online: https://huggingface.co/spaces/dx2102/llama-midi
This model is finetuned from the `Llama-3.2-1B` language model.
It learns to write MIDI music scores with a text representation.
Optionally, the score title can also be used as a text prompt.
To use this model, you can simply take existing code and replace `meta-llama/Llama-3.2-1B` with `dx2102/llama-midi`.
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="dx2102/llama-midi",
torch_dtype=torch.bfloat16,
device="cuda", # cuda/mps/cpu
)
txt = pipe(
'''
Bach
pitch duration wait velocity instrument
'''.strip(),
max_length=100,
temperature=1.0,
top_p=1.0,
)[0]["generated_text"]  # text-generation pipelines return [{"generated_text": ...}]
print(txt)
```
To convert the text representation back to a midi file, try this:
```bash
# install this midi library
pip install symusic
```
[symusic](https://github.com/Yikai-Liao/symusic) is a fast C++/Python library for efficient MIDI manipulation.
```python
import symusic
# For example
txt = '''pitch duration wait velocity instrument
71 1310 0 20 0
48 330 350 20 0
55 330 350 20 0
64 1310 690 20 0
74 660 690 20 0
69 1310 0 20 0
48 330 350 20 0
57 330 350 20 0
66 1310 690 20 0
67 330 350 20 0
69 330 350 20 0
71 1310 0 20 0
48 330 350 20 0
55 330 350 20 0
64 1310 690 20 0
74 660 690 20 0
69 1970 0 20 0
48 330 350 20 0
'''
def postprocess(txt, path):
# assert txt.startswith(prompt)
txt = txt.split('\n\n')[-1]
tracks = {}
now = 0
# we need to ignore the invalid output by the model
try:
        for line in txt.split('\n'):
            line = line.strip()
            if not line or line.startswith('pitch'):
                continue  # skip the header row and blank lines; they would fail int()
            pitch, duration, wait, velocity, instrument = line.split()
pitch, duration, wait, velocity = [int(x) for x in [pitch, duration, wait, velocity]]
if instrument not in tracks:
tracks[instrument] = symusic.core.TrackSecond()
if instrument != 'drum':
tracks[instrument].program = int(instrument)
else:
tracks[instrument].is_drum = True
# Eg. Note(time=7.47, duration=5.25, pitch=43, velocity=64, ttype='Second')
tracks[instrument].notes.append(symusic.core.NoteSecond(
time=now/1000,
duration=duration/1000,
pitch=int(pitch),
velocity=int(velocity * 4),
))
now += wait
except Exception as e:
print('Postprocess: Ignored error:', e)
print(f'Postprocess: Got {sum(len(track.notes) for track in tracks.values())} notes')
try:
score = symusic.Score(ttype='Second')
score.tracks.extend(tracks.values())
score.dump_midi(path)
except Exception as e:
print('Postprocess: Ignored postprocessing error:', e)
postprocess(txt, './result.mid')
```
|
neural-coder/llama-ape-finetuned-6 | neural-coder | "2025-04-19T19:06:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"base_model:Team-ACE/ToolACE-2-Llama-3.1-8B",
"base_model:finetune:Team-ACE/ToolACE-2-Llama-3.1-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T17:45:38Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Team-ACE/ToolACE-2-Llama-3.1-8B
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
hardlyworking/Ramen-12B-Q4_K_S-GGUF | hardlyworking | "2025-04-19T19:05:06Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:hardlyworking/Ramen-12B",
"base_model:quantized:hardlyworking/Ramen-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T19:04:35Z" | ---
base_model: hardlyworking/Ramen-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# hardlyworking/Ramen-12B-Q4_K_S-GGUF
This model was converted to GGUF format from [`hardlyworking/Ramen-12B`](https://huggingface.co/hardlyworking/Ramen-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hardlyworking/Ramen-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hardlyworking/Ramen-12B-Q4_K_S-GGUF --hf-file ramen-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hardlyworking/Ramen-12B-Q4_K_S-GGUF --hf-file ramen-12b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hardlyworking/Ramen-12B-Q4_K_S-GGUF --hf-file ramen-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hardlyworking/Ramen-12B-Q4_K_S-GGUF --hf-file ramen-12b-q4_k_s.gguf -c 2048
```
|
TOMFORD79/Candy_5 | TOMFORD79 | "2025-04-19T19:01:57Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T18:34:34Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
CyberGhostAlbert/realistic-monster | CyberGhostAlbert | "2025-04-19T19:01:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-19T18:15:46Z" | # Realistic Vision Monster Version
Only the main `.safetensors` file is preserved, for Monster API compatibility.
KHAOULA-KH/CAR_DOMMAGE_CPU_MODEL | KHAOULA-KH | "2025-04-19T19:00:29Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T19:00:29Z" | ---
license: apache-2.0
---
|
ykarout/phi-4-deepseek-r1-distilled-fp16 | ykarout | "2025-04-19T18:57:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T18:36:51Z" | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ykarout
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jobbsteh/ppo-LunarLander-v2 | Jobbsteh | "2025-04-19T18:57:25Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-04-19T18:56:38Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.81 +/- 17.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub; the filename below is an assumption
checkpoint = load_from_hub("Jobbsteh/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BlcaCola/YI-AI-Chinese-4B-it-V1-Q6_K-GGUF | BlcaCola | "2025-04-19T18:56:47Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:BlcaCola/YI-AI-Chinese-4B-it-V1",
"base_model:quantized:BlcaCola/YI-AI-Chinese-4B-it-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T18:56:30Z" | ---
base_model: BlcaCola/YI-AI-Chinese-4B-it-V1
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# BlcaCola/YI-AI-Chinese-4B-it-V1-Q6_K-GGUF
This model was converted to GGUF format from [`BlcaCola/YI-AI-Chinese-4B-it-V1`](https://huggingface.co/BlcaCola/YI-AI-Chinese-4B-it-V1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BlcaCola/YI-AI-Chinese-4B-it-V1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q6_K-GGUF --hf-file yi-ai-chinese-4b-it-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q6_K-GGUF --hf-file yi-ai-chinese-4b-it-v1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q6_K-GGUF --hf-file yi-ai-chinese-4b-it-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q6_K-GGUF --hf-file yi-ai-chinese-4b-it-v1-q6_k.gguf -c 2048
```
|
Haitao999/Qwen2.5-7B-Instruct-EMPO-natural_reasoning_simple-0419 | Haitao999 | "2025-04-19T18:55:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:qingyangzhang/natural_reasoning_simple",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T05:39:43Z" | ---
datasets: qingyangzhang/natural_reasoning_simple
library_name: transformers
model_name: Qwen2.5-7B-Instruct-EMPO-natural_reasoning_simple-0419
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-EMPO-natural_reasoning_simple-0419
This model is a fine-tuned version of an unspecified base model on the [qingyangzhang/natural_reasoning_simple](https://huggingface.co/datasets/qingyangzhang/natural_reasoning_simple) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Haitao999/Qwen2.5-7B-Instruct-EMPO-natural_reasoning_simple-0419", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tjucsailab/huggingface/runs/9ddax7nu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BlcaCola/YI-AI-Chinese-4B-it-V1-Q5_0-GGUF | BlcaCola | "2025-04-19T18:55:11Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:BlcaCola/YI-AI-Chinese-4B-it-V1",
"base_model:quantized:BlcaCola/YI-AI-Chinese-4B-it-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T18:54:54Z" | ---
base_model: BlcaCola/YI-AI-Chinese-4B-it-V1
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# BlcaCola/YI-AI-Chinese-4B-it-V1-Q5_0-GGUF
This model was converted to GGUF format from [`BlcaCola/YI-AI-Chinese-4B-it-V1`](https://huggingface.co/BlcaCola/YI-AI-Chinese-4B-it-V1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BlcaCola/YI-AI-Chinese-4B-it-V1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q5_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q5_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q5_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q5_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q5_0.gguf -c 2048
```
|
rbelanec/train_cola_1744902678 | rbelanec | "2025-04-19T18:53:49Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T12:22:03Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_cola_1744902678
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_1744902678
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1416
- Num Input Tokens Seen: 28700680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.4054 | 0.4158 | 200 | 0.3175 | 143936 |
| 0.2079 | 0.8316 | 400 | 0.2186 | 287392 |
| 0.1714 | 1.2474 | 600 | 0.2035 | 430968 |
| 0.2061 | 1.6632 | 800 | 0.1944 | 574456 |
| 0.2034 | 2.0790 | 1000 | 0.1925 | 718448 |
| 0.1608 | 2.4948 | 1200 | 0.1897 | 862224 |
| 0.2277 | 2.9106 | 1400 | 0.1841 | 1004880 |
| 0.1451 | 3.3264 | 1600 | 0.1766 | 1148296 |
| 0.1774 | 3.7422 | 1800 | 0.1750 | 1292616 |
| 0.1937 | 4.1580 | 2000 | 0.1750 | 1436240 |
| 0.115 | 4.5738 | 2200 | 0.1730 | 1579408 |
| 0.1229 | 4.9896 | 2400 | 0.1743 | 1723056 |
| 0.1039 | 5.4054 | 2600 | 0.1655 | 1866504 |
| 0.1567 | 5.8212 | 2800 | 0.1636 | 2009832 |
| 0.1797 | 6.2370 | 3000 | 0.1641 | 2153504 |
| 0.1581 | 6.6528 | 3200 | 0.1661 | 2296672 |
| 0.1829 | 7.0686 | 3400 | 0.1634 | 2440240 |
| 0.1354 | 7.4844 | 3600 | 0.1612 | 2583952 |
| 0.1195 | 7.9002 | 3800 | 0.1590 | 2727536 |
| 0.1278 | 8.3160 | 4000 | 0.1570 | 2870176 |
| 0.1559 | 8.7318 | 4200 | 0.1623 | 3013792 |
| 0.1162 | 9.1476 | 4400 | 0.1586 | 3157976 |
| 0.1551 | 9.5634 | 4600 | 0.1591 | 3301400 |
| 0.146 | 9.9792 | 4800 | 0.1551 | 3445528 |
| 0.1104 | 10.3950 | 5000 | 0.1562 | 3588176 |
| 0.15 | 10.8108 | 5200 | 0.1569 | 3731888 |
| 0.1356 | 11.2266 | 5400 | 0.1554 | 3876072 |
| 0.2153 | 11.6424 | 5600 | 0.1566 | 4020200 |
| 0.1705 | 12.0582 | 5800 | 0.1565 | 4162880 |
| 0.1616 | 12.4740 | 6000 | 0.1523 | 4305664 |
| 0.0836 | 12.8898 | 6200 | 0.1557 | 4449504 |
| 0.146 | 13.3056 | 6400 | 0.1483 | 4592824 |
| 0.136 | 13.7214 | 6600 | 0.1522 | 4737208 |
| 0.1068 | 14.1372 | 6800 | 0.1503 | 4880104 |
| 0.1307 | 14.5530 | 7000 | 0.1545 | 5024232 |
| 0.1475 | 14.9688 | 7200 | 0.1489 | 5167336 |
| 0.1159 | 15.3846 | 7400 | 0.1476 | 5311512 |
| 0.1145 | 15.8004 | 7600 | 0.1501 | 5454712 |
| 0.1116 | 16.2162 | 7800 | 0.1580 | 5598576 |
| 0.1438 | 16.6320 | 8000 | 0.1547 | 5741776 |
| 0.1108 | 17.0478 | 8200 | 0.1530 | 5885896 |
| 0.1097 | 17.4636 | 8400 | 0.1456 | 6030472 |
| 0.123 | 17.8794 | 8600 | 0.1486 | 6172872 |
| 0.1569 | 18.2952 | 8800 | 0.1473 | 6316224 |
| 0.1355 | 18.7110 | 9000 | 0.1541 | 6460064 |
| 0.1568 | 19.1268 | 9200 | 0.1444 | 6603384 |
| 0.1126 | 19.5426 | 9400 | 0.1453 | 6746616 |
| 0.0971 | 19.9584 | 9600 | 0.1459 | 6890808 |
| 0.1144 | 20.3742 | 9800 | 0.1509 | 7033840 |
| 0.1154 | 20.7900 | 10000 | 0.1472 | 7177136 |
| 0.1629 | 21.2058 | 10200 | 0.1470 | 7320168 |
| 0.114 | 21.6216 | 10400 | 0.1500 | 7464136 |
| 0.1185 | 22.0374 | 10600 | 0.1449 | 7607816 |
| 0.1286 | 22.4532 | 10800 | 0.1437 | 7751560 |
| 0.1344 | 22.8690 | 11000 | 0.1511 | 7895400 |
| 0.0899 | 23.2848 | 11200 | 0.1432 | 8038480 |
| 0.0867 | 23.7006 | 11400 | 0.1457 | 8182416 |
| 0.1388 | 24.1164 | 11600 | 0.1501 | 8325888 |
| 0.1396 | 24.5322 | 11800 | 0.1527 | 8468992 |
| 0.0853 | 24.9480 | 12000 | 0.1477 | 8612096 |
| 0.098 | 25.3638 | 12200 | 0.1427 | 8756152 |
| 0.1308 | 25.7796 | 12400 | 0.1466 | 8899640 |
| 0.1043 | 26.1954 | 12600 | 0.1494 | 9042656 |
| 0.1072 | 26.6112 | 12800 | 0.1439 | 9186656 |
| 0.1031 | 27.0270 | 13000 | 0.1476 | 9329688 |
| 0.1083 | 27.4428 | 13200 | 0.1420 | 9472184 |
| 0.1044 | 27.8586 | 13400 | 0.1510 | 9616056 |
| 0.0876 | 28.2744 | 13600 | 0.1452 | 9759824 |
| 0.0652 | 28.6902 | 13800 | 0.1463 | 9903824 |
| 0.1238 | 29.1060 | 14000 | 0.1438 | 10046680 |
| 0.0927 | 29.5218 | 14200 | 0.1438 | 10190040 |
| 0.1054 | 29.9376 | 14400 | 0.1492 | 10333816 |
| 0.1422 | 30.3534 | 14600 | 0.1447 | 10476752 |
| 0.1203 | 30.7692 | 14800 | 0.1501 | 10620240 |
| 0.1145 | 31.1850 | 15000 | 0.1417 | 10763368 |
| 0.0727 | 31.6008 | 15200 | 0.1448 | 10906568 |
| 0.1571 | 32.0166 | 15400 | 0.1494 | 11049768 |
| 0.0968 | 32.4324 | 15600 | 0.1504 | 11193256 |
| 0.0854 | 32.8482 | 15800 | 0.1446 | 11336648 |
| 0.0739 | 33.2640 | 16000 | 0.1454 | 11481080 |
| 0.0903 | 33.6798 | 16200 | 0.1439 | 11624376 |
| 0.0906 | 34.0956 | 16400 | 0.1429 | 11766832 |
| 0.1062 | 34.5114 | 16600 | 0.1463 | 11910672 |
| 0.1066 | 34.9272 | 16800 | 0.1444 | 12054512 |
| 0.1179 | 35.3430 | 17000 | 0.1451 | 12198464 |
| 0.1434 | 35.7588 | 17200 | 0.1438 | 12341536 |
| 0.1222 | 36.1746 | 17400 | 0.1431 | 12485368 |
| 0.1897 | 36.5904 | 17600 | 0.1429 | 12629496 |
| 0.1307 | 37.0062 | 17800 | 0.1425 | 12772208 |
| 0.1357 | 37.4220 | 18000 | 0.1439 | 12915888 |
| 0.151 | 37.8378 | 18200 | 0.1416 | 13058896 |
| 0.102 | 38.2536 | 18400 | 0.1416 | 13201856 |
| 0.1296 | 38.6694 | 18600 | 0.1456 | 13344736 |
| 0.142 | 39.0852 | 18800 | 0.1468 | 13489016 |
| 0.0924 | 39.5010 | 19000 | 0.1510 | 13632312 |
| 0.0935 | 39.9168 | 19200 | 0.1454 | 13775960 |
| 0.118 | 40.3326 | 19400 | 0.1424 | 13918888 |
| 0.0833 | 40.7484 | 19600 | 0.1499 | 14062184 |
| 0.1225 | 41.1642 | 19800 | 0.1418 | 14206632 |
| 0.1059 | 41.5800 | 20000 | 0.1488 | 14349800 |
| 0.1191 | 41.9958 | 20200 | 0.1456 | 14493096 |
| 0.0844 | 42.4116 | 20400 | 0.1424 | 14636824 |
| 0.094 | 42.8274 | 20600 | 0.1445 | 14780056 |
| 0.0911 | 43.2432 | 20800 | 0.1470 | 14922952 |
| 0.1289 | 43.6590 | 21000 | 0.1469 | 15066120 |
| 0.1489 | 44.0748 | 21200 | 0.1436 | 15209536 |
| 0.094 | 44.4906 | 21400 | 0.1433 | 15353920 |
| 0.1047 | 44.9064 | 21600 | 0.1430 | 15497376 |
| 0.1176 | 45.3222 | 21800 | 0.1418 | 15641208 |
| 0.0974 | 45.7380 | 22000 | 0.1444 | 15784536 |
| 0.0903 | 46.1538 | 22200 | 0.1457 | 15928528 |
| 0.0802 | 46.5696 | 22400 | 0.1422 | 16072048 |
| 0.0948 | 46.9854 | 22600 | 0.1437 | 16214832 |
| 0.0711 | 47.4012 | 22800 | 0.1448 | 16358208 |
| 0.1001 | 47.8170 | 23000 | 0.1448 | 16501568 |
| 0.0753 | 48.2328 | 23200 | 0.1461 | 16645480 |
| 0.1133 | 48.6486 | 23400 | 0.1431 | 16789224 |
| 0.1046 | 49.0644 | 23600 | 0.1509 | 16932768 |
| 0.0668 | 49.4802 | 23800 | 0.1451 | 17076672 |
| 0.1376 | 49.8960 | 24000 | 0.1443 | 17220000 |
| 0.0919 | 50.3119 | 24200 | 0.1426 | 17363816 |
| 0.0665 | 50.7277 | 24400 | 0.1423 | 17508072 |
| 0.117 | 51.1435 | 24600 | 0.1501 | 17651488 |
| 0.0967 | 51.5593 | 24800 | 0.1453 | 17795328 |
| 0.1266 | 51.9751 | 25000 | 0.1447 | 17938368 |
| 0.0748 | 52.3909 | 25200 | 0.1443 | 18081176 |
| 0.1336 | 52.8067 | 25400 | 0.1453 | 18224696 |
| 0.0805 | 53.2225 | 25600 | 0.1442 | 18369136 |
| 0.0733 | 53.6383 | 25800 | 0.1437 | 18511824 |
| 0.0814 | 54.0541 | 26000 | 0.1432 | 18655008 |
| 0.0856 | 54.4699 | 26200 | 0.1490 | 18798592 |
| 0.1183 | 54.8857 | 26400 | 0.1463 | 18942016 |
| 0.1266 | 55.3015 | 26600 | 0.1465 | 19085296 |
| 0.0854 | 55.7173 | 26800 | 0.1458 | 19229616 |
| 0.0836 | 56.1331 | 27000 | 0.1454 | 19373160 |
| 0.1123 | 56.5489 | 27200 | 0.1431 | 19516200 |
| 0.1217 | 56.9647 | 27400 | 0.1463 | 19659656 |
| 0.1149 | 57.3805 | 27600 | 0.1426 | 19803672 |
| 0.0753 | 57.7963 | 27800 | 0.1456 | 19947800 |
| 0.0848 | 58.2121 | 28000 | 0.1492 | 20090864 |
| 0.0713 | 58.6279 | 28200 | 0.1445 | 20234160 |
| 0.1056 | 59.0437 | 28400 | 0.1473 | 20378152 |
| 0.0931 | 59.4595 | 28600 | 0.1459 | 20521096 |
| 0.0841 | 59.8753 | 28800 | 0.1458 | 20664744 |
| 0.1066 | 60.2911 | 29000 | 0.1450 | 20808544 |
| 0.0863 | 60.7069 | 29200 | 0.1434 | 20952064 |
| 0.1233 | 61.1227 | 29400 | 0.1470 | 21095536 |
| 0.1196 | 61.5385 | 29600 | 0.1437 | 21239216 |
| 0.0911 | 61.9543 | 29800 | 0.1448 | 21382704 |
| 0.0734 | 62.3701 | 30000 | 0.1442 | 21526584 |
| 0.143 | 62.7859 | 30200 | 0.1455 | 21670744 |
| 0.0983 | 63.2017 | 30400 | 0.1443 | 21813952 |
| 0.1579 | 63.6175 | 30600 | 0.1440 | 21956992 |
| 0.0536 | 64.0333 | 30800 | 0.1433 | 22100720 |
| 0.1065 | 64.4491 | 31000 | 0.1453 | 22244240 |
| 0.1196 | 64.8649 | 31200 | 0.1440 | 22388368 |
| 0.132 | 65.2807 | 31400 | 0.1444 | 22531840 |
| 0.0858 | 65.6965 | 31600 | 0.1459 | 22674688 |
| 0.0828 | 66.1123 | 31800 | 0.1433 | 22817880 |
| 0.1095 | 66.5281 | 32000 | 0.1442 | 22962360 |
| 0.0726 | 66.9439 | 32200 | 0.1449 | 23105624 |
| 0.1103 | 67.3597 | 32400 | 0.1468 | 23248272 |
| 0.086 | 67.7755 | 32600 | 0.1448 | 23391888 |
| 0.1045 | 68.1913 | 32800 | 0.1429 | 23535616 |
| 0.0687 | 68.6071 | 33000 | 0.1447 | 23678976 |
| 0.0791 | 69.0229 | 33200 | 0.1453 | 23823128 |
| 0.0906 | 69.4387 | 33400 | 0.1446 | 23966488 |
| 0.1076 | 69.8545 | 33600 | 0.1448 | 24110648 |
| 0.0866 | 70.2703 | 33800 | 0.1435 | 24253072 |
| 0.1197 | 70.6861 | 34000 | 0.1448 | 24396528 |
| 0.1497 | 71.1019 | 34200 | 0.1453 | 24540040 |
| 0.1028 | 71.5177 | 34400 | 0.1451 | 24683144 |
| 0.0874 | 71.9335 | 34600 | 0.1458 | 24827048 |
| 0.1154 | 72.3493 | 34800 | 0.1451 | 24970840 |
| 0.0979 | 72.7651 | 35000 | 0.1455 | 25115672 |
| 0.0703 | 73.1809 | 35200 | 0.1441 | 25258416 |
| 0.1256 | 73.5967 | 35400 | 0.1443 | 25402448 |
| 0.1286 | 74.0125 | 35600 | 0.1445 | 25545128 |
| 0.1168 | 74.4283 | 35800 | 0.1453 | 25688392 |
| 0.1085 | 74.8441 | 36000 | 0.1453 | 25831720 |
| 0.1001 | 75.2599 | 36200 | 0.1453 | 25975928 |
| 0.0624 | 75.6757 | 36400 | 0.1443 | 26119704 |
| 0.0936 | 76.0915 | 36600 | 0.1454 | 26262696 |
| 0.0826 | 76.5073 | 36800 | 0.1442 | 26406024 |
| 0.0844 | 76.9231 | 37000 | 0.1469 | 26550088 |
| 0.0912 | 77.3389 | 37200 | 0.1445 | 26693856 |
| 0.1002 | 77.7547 | 37400 | 0.1461 | 26837120 |
| 0.0781 | 78.1705 | 37600 | 0.1451 | 26980600 |
| 0.0805 | 78.5863 | 37800 | 0.1449 | 27124888 |
| 0.0633 | 79.0021 | 38000 | 0.1443 | 27266800 |
| 0.089 | 79.4179 | 38200 | 0.1453 | 27410736 |
| 0.1174 | 79.8337 | 38400 | 0.1455 | 27553360 |
| 0.0652 | 80.2495 | 38600 | 0.1453 | 27696864 |
| 0.1045 | 80.6653 | 38800 | 0.1448 | 27839840 |
| 0.0912 | 81.0811 | 39000 | 0.1449 | 27983384 |
| 0.1128 | 81.4969 | 39200 | 0.1453 | 28127512 |
| 0.0817 | 81.9127 | 39400 | 0.1452 | 28270104 |
| 0.0773 | 82.3285 | 39600 | 0.1458 | 28413680 |
| 0.0538 | 82.7443 | 39800 | 0.1474 | 28557552 |
| 0.0847 | 83.1601 | 40000 | 0.1452 | 28700680 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
dzanbek/ff9ee20c-15de-4a6f-bfff-f52916894ac0 | dzanbek | "2025-04-19T18:53:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T18:33:51Z" | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ff9ee20c-15de-4a6f-bfff-f52916894ac0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0b33416fff5c7d04_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0b33416fff5c7d04_train_data.json
type:
field_input: recipe
field_instruction: title
field_output: classification_result
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/ff9ee20c-15de-4a6f-bfff-f52916894ac0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0b33416fff5c7d04_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee374f63-9763-4ef7-b53d-cd993040de9f
wandb_project: 01-31
wandb_run: your_name
wandb_runid: ee374f63-9763-4ef7-b53d-cd993040de9f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ff9ee20c-15de-4a6f-bfff-f52916894ac0
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the dataset referenced in the axolotl config above (`0b33416fff5c7d04_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
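No usage example is provided, and the reported evaluation loss is NaN, so any outputs should be sanity-checked. Below is a minimal loading sketch with PEFT; the assumption that this repo hosts the LoRA adapter weights is based on the metadata above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"                     # base model from the config
adapter_id = "dzanbek/ff9ee20c-15de-4a6f-bfff-f52916894ac0"  # assumed adapter location

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```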
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0758 | 150 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BlcaCola/YI-AI-Chinese-4B-it-V1-Q4_0-GGUF | BlcaCola | "2025-04-19T18:53:24Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:BlcaCola/YI-AI-Chinese-4B-it-V1",
"base_model:quantized:BlcaCola/YI-AI-Chinese-4B-it-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T18:53:11Z" | ---
base_model: BlcaCola/YI-AI-Chinese-4B-it-V1
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# BlcaCola/YI-AI-Chinese-4B-it-V1-Q4_0-GGUF
This model was converted to GGUF format from [`BlcaCola/YI-AI-Chinese-4B-it-V1`](https://huggingface.co/BlcaCola/YI-AI-Chinese-4B-it-V1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BlcaCola/YI-AI-Chinese-4B-it-V1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q4_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q4_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q4_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BlcaCola/YI-AI-Chinese-4B-it-V1-Q4_0-GGUF --hf-file yi-ai-chinese-4b-it-v1-q4_0.gguf -c 2048
```
|
LyliaEngine/Power_Puff_MixLora | LyliaEngine | "2025-04-19T18:52:50Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:LyliaEngine/ilustmix_v55",
"base_model:adapter:LyliaEngine/ilustmix_v55",
"license:cdla-permissive-2.0",
"region:us"
] | text-to-image | "2025-04-19T18:51:24Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
8K, depth of field, focused subject, dynamic angle, sexy pose, best quality,
detailed eyes, perfect eyes, realistic eyes, short blonde bob cut, (pink
highlights), blue eyes, (mascara), makeup, slim body, maid cap, black
stockings, small ass, medium breast, realistic breast, ruffled maid dress,
(black dress), long sleeves, white apron, black ruffled skirt, BREAK, lying
on the side, legs spread, front view, looking at viewer, movie perspective,
fractal background, abstract background, dynamic angle,
<lora:MoriiMee_Gothic_Niji_Style_Illustrious_r1:0.5> artist:moriimee,
<lora:PowerPuffMixLora:0.6>,
parameters:
negative_prompt: >-
bed, shine, (worst quality, low quality, sketch:1.1),error, bad anatomy,
bad hands, watermark, ugly, distorted, censored, lowers, multiple views,
signature, 3D,
output:
url: images/2-txt2img-20250224-163218-332824275.png
- text: >-
8K, depth of field, focused subject, dynamic angle, sexy pose, best quality,
detailed eyes, perfect eyes, realistic eyes, white hair, long hair, rainbow
highlights, blunt bangs, thick eyebrows, black eyebrows, brown eyes,
mascara, makeup, pink lips, parted lips, fit body, bare shoulders, yellow
sundress, long sundress, revealing sundress, small ass, medium breast,
realistic breast, braless, , BREAK, sitting, leaning forward slightly, front
view, looking at viewer, movie perspective, fractal background, abstract
background, dynamic angle, <lora:PowerPuffMixLora:0.6>,
parameters:
negative_prompt: >-
bed, shine, (worst quality, low quality, sketch:1.1),error, bad anatomy,
bad hands, watermark, ugly, distorted, censored, lowers, multiple views,
signature, 3D,
output:
url: images/0-txt2img-20250224-162709-2083372524.png
- text: >-
8K, depth of field, focused subject, dynamic angle, sexy pose, best quality,
detailed eyes, perfect eyes, realistic eyes, white hair, long hair, blue
eyes, (black eyeliner), (freckles), slim body, sunhat, white sandals, small
ass, small breast, realistic breast, (floral summer dress), off-shoulder, ,
BREAK, sitting, leaning forward slightly, front view, looking at viewer,
movie perspective, fractal background, abstract background, dynamic angle,
<lora:MoriiMee_Gothic_Niji_Style_Illustrious_r1:0.5> artist:moriimee,
<lora:PowerPuffMixLora:0.6>,
parameters:
negative_prompt: >-
bed, shine, (worst quality, low quality, sketch:1.1),error, bad anatomy,
bad hands, watermark, ugly, distorted, censored, lowers, multiple views,
signature, 3D,
output:
url: images/0-txt2img-20250224-163027-1312889651.png
base_model: LyliaEngine/ilustmix_v55
instance_prompt: None
license: cdla-permissive-2.0
---
# Power_Puff_MixLora
<Gallery />
## Model description
If you've seen my work or followed my journey, you probably already know what PowerPuffMixLora is all about.
For those who don't, I refined my portraits from the PowerPuffMix model and created a LoRA that enhances character aesthetics. The secret? Subtlety. Keeping the effect low is what makes the magic happen, at least that's how I do it.
## Source
https://civitai.com/models/1290802/powerpuffmixlora
## Credit
https://civitai.com/user/GZees
## Trigger words
No trigger word is required; the `instance_prompt` for this LoRA is `None`.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LyliaEngine/Power_Puff_MixLora/tree/main) them in the Files & versions tab.
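For loading in code, a minimal diffusers sketch is shown below. The pipeline class and scale are assumptions: the base checkpoint is treated as a diffusers-compatible SDXL/Illustrious-family model, and 0.6 mirrors the LoRA weight used in the example prompts above.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumes LyliaEngine/ilustmix_v55 ships in a diffusers-compatible SDXL layout.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "LyliaEngine/ilustmix_v55", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("LyliaEngine/Power_Puff_MixLora")
pipe.fuse_lora(lora_scale=0.6)  # keep the effect subtle, as the author suggests

image = pipe(
    "8K, depth of field, best quality, detailed eyes, short blonde bob cut"
).images[0]
image.save("powerpuffmix_sample.png")
```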
|
TOMFORD79/Candy_3 | TOMFORD79 | "2025-04-19T18:48:10Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T18:34:20Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
cwestnedge/gpt2-small-pubmed | cwestnedge | "2025-04-19T18:46:20Z" | 94 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"medical",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-17T17:23:37Z" | ---
library_name: transformers
tags:
- medical
license: mit
language:
- en
metrics:
- perplexity
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
---
## Overview
This [pipeline](https://github.com/donkeyanaphora/SHALLOW_FUSION) was used to fine-tune GPT-2 [small](https://huggingface.co/openai-community/gpt2), [medium](https://huggingface.co/openai-community/gpt2-medium), and [large](https://huggingface.co/openai-community/gpt2-large) on abstracts from PubMed's [baseline data](https://ftp.ncbi.nlm.nih.gov/pubmed/README.txt). Models were trained on a single A100 GPU in Google Colab.
---
## Training
#### Setup
- Single epoch over **221,709 batches × 16 × 1024 tokens** ≈ **3.63 billion tokens**
- Identical optimizer, learning-rate schedule, and hyper-parameters for all models
- No additional regularization or early stopping
#### Loss
Here are the loss curves for GPT-2 small, medium, and large fine-tuned on PubMed abstracts over a single epoch.
- [Loss comparisons](https://huggingface.co/cwestnedge/gpt2-small-pubmed/blob/main/output.png)
---
## Evaluation
#### Dataset
Hold-out set of **1000 × 16 × 1024 tokens** (≈ 16.4 M tokens) randomly sampled from PubMed abstracts, disjoint from the training split.
#### Metrics
Cross-entropy loss (averaged over all tokens) and derived perplexity (`ppl = exp(loss)`) on the hold-out set:
| Model | Parameters | Avg CE Loss ↓ | Perplexity ↓ |
|--------------------------|-----------:|-------------:|------------:|
| **gpt2-small-pubmed** | 124 M | 2.5017 | 12.20 |
| [gpt2-medium-pubmed](https://huggingface.co/cwestnedge/gpt2-medium-pubmed) | 355 M | 2.2984 | 9.96 |
| [gpt2-large-pubmed](https://huggingface.co/cwestnedge/gpt2-large-pubmed) | 774 M | 2.1863 | 8.90 |
#### Caveats
- Perplexities are **in-domain** (PubMed abstracts) and may not reflect general-purpose LM quality
- Only one epoch of training; performance likely improves with more epochs or hyper-parameter tuning
- Downstream biomedical benchmarks have not yet been conducted
---
## Usage
#### 1) Quick-start with the 🤗 pipeline API
```python
from transformers import pipeline
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
generator = pipeline(
"text-generation",
model="cwestnedge/gpt2-small-pubmed",
tokenizer="openai-community/gpt2",
device=device,
)
prompt = (
"Background: The CRISPRโCas9 system has revolutionized gene editing. "
"In this study, we evaluate its efficacy in"
)
out = generator(
prompt,
max_length=200,
temperature=1e-9,
top_p=1e-9,
num_return_sequences=1,
truncation=True,
)
print(out[0]["generated_text"])
```
#### 2) Manual load + generate for finer control
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "cwestnedge/gpt2-small-pubmed"
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
inputs = tokenizer(
"Methods: We performed a doubleโblind randomized trial to assess",
return_tensors="pt",
).to(device)
gen_ids = model.generate(
**inputs,
max_length=150,
num_beams=5,
no_repeat_ngram_size=2,
early_stopping=True,
)
print(tokenizer.decode(gen_ids[0], skip_special_tokens=True))
```
#### 3) Scoring / perplexity
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "cwestnedge/gpt2-small-pubmed"
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
text = (
"Tetralogy of Fallot is a rare congenital heart condition that is present at birth."
)
enc = tokenizer(text, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**enc, labels=enc.input_ids)
loss = outputs.loss
ppl = torch.exp(loss)
print(f"CE loss: {loss:.4f} โ Perplexity: {ppl:.2f}")
``` |
mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF | mradermacher | "2025-04-19T18:44:36Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:gabrielbosse9/Umbr0x-1.5B-V3.1-16bit-2",
"base_model:quantized:gabrielbosse9/Umbr0x-1.5B-V3.1-16bit-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T18:01:49Z" | ---
base_model: gabrielbosse9/Umbr0x-1.5B-V3.1-16bit-2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gabrielbosse9/Umbr0x-1.5B-V3.1-16bit-2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
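Besides the llama.cpp binaries, one possible way to run these files from Python is llama-cpp-python. This is a sketch, not an endorsed workflow: the quant file name is one of the entries from the table that follows, and the call signature assumes a recent version of that library.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF",
    filename="Umbr0x-1.5B-V3.1-16bit-2.Q4_K_M.gguf",  # "fast, recommended" quant below
    n_ctx=2048,
)
print(llm("The quick brown fox", max_tokens=32)["choices"][0]["text"])
```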
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Umbr0x-1.5B-V3.1-16bit-2-GGUF/resolve/main/Umbr0x-1.5B-V3.1-16bit-2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sedfg4gh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_vigilant_hawk | sedfg4gh | "2025-04-19T18:44:24Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am diving vigilant hawk",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-15T15:24:07Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_vigilant_hawk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am diving vigilant hawk
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_vigilant_hawk
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sedfg4gh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_vigilant_hawk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cwestnedge/gpt2-large-pubmed | cwestnedge | "2025-04-19T18:42:09Z" | 182 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"medical",
"en",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-12T02:39:39Z" | ---
library_name: transformers
tags:
- medical
license: mit
language:
- en
metrics:
- perplexity
base_model:
- openai-community/gpt2-large
pipeline_tag: text-generation
---
## Overview
This [pipeline](https://github.com/donkeyanaphora/SHALLOW_FUSION) was used to fine-tune GPT-2 [small](https://huggingface.co/openai-community/gpt2), [medium](https://huggingface.co/openai-community/gpt2-medium), and [large](https://huggingface.co/openai-community/gpt2-large) on abstracts from PubMed's [baseline data](https://ftp.ncbi.nlm.nih.gov/pubmed/README.txt). Models were trained on a single A100 GPU in Google Colab.
---
## Training
#### Setup
- Single epoch over **221,709 batches × 16 × 1024 tokens** ≈ **3.63 billion tokens**
- Identical optimizer, learning-rate schedule, and hyper-parameters for all models
- No additional regularization or early stopping
#### Loss
Here are the loss curves for GPT-2 small, medium, and large fine-tuned on PubMed abstracts over a single epoch.
- [Loss comparisons](https://huggingface.co/cwestnedge/gpt2-small-pubmed/blob/main/output.png)
---
## Evaluation
#### Dataset
Hold-out set of **1000 × 16 × 1024 tokens** (≈ 16.4 M tokens) randomly sampled from PubMed abstracts, disjoint from the training split.
#### Metrics
Cross-entropy loss (averaged over all tokens) and derived perplexity (`ppl = exp(loss)`) on the hold-out set:
| Model | Parameters | Avg CE Loss ↓ | Perplexity ↓ |
|--------------------------|-----------:|-------------:|------------:|
| [gpt2-small-pubmed](https://huggingface.co/cwestnedge/gpt2-small-pubmed) | 124 M | 2.5017 | 12.20 |
| [gpt2-medium-pubmed](https://huggingface.co/cwestnedge/gpt2-medium-pubmed) | 355 M | 2.2984 | 9.96 |
| **gpt2-large-pubmed** | 774 M | 2.1863 | 8.90 |
#### Caveats
- Perplexities are **in-domain** (PubMed abstracts) and may not reflect general-purpose LM quality
- Only one epoch of training; performance likely improves with more epochs or hyper-parameter tuning
- Downstream biomedical benchmarks have not yet been conducted
---
## Usage
#### 1) Quick-start with the 🤗 pipeline API
```python
from transformers import pipeline
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
generator = pipeline(
"text-generation",
model="cwestnedge/gpt2-large-pubmed",
tokenizer="openai-community/gpt2-large",
device=device,
)
prompt = (
"Background: The CRISPRโCas9 system has revolutionized gene editing. "
"In this study, we evaluate its efficacy in"
)
out = generator(
prompt,
max_length=200,
temperature=1e-9,
top_p=1e-9,
num_return_sequences=1,
truncation=True,
)
print(out[0]["generated_text"])
```
#### 2) Manual load + generate for finer control
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "cwestnedge/gpt2-large-pubmed"
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
inputs = tokenizer(
"Methods: We performed a doubleโblind randomized trial to assess",
return_tensors="pt",
).to(device)
gen_ids = model.generate(
**inputs,
max_length=150,
num_beams=5,
no_repeat_ngram_size=2,
early_stopping=True,
)
print(tokenizer.decode(gen_ids[0], skip_special_tokens=True))
```
#### 3) Scoring / perplexity
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "cwestnedge/gpt2-large-pubmed"
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
text = (
"Tetralogy of Fallot is a rare congenital heart condition that is present at birth."
)
enc = tokenizer(text, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**enc, labels=enc.input_ids)
loss = outputs.loss
ppl = torch.exp(loss)
print(f"CE loss: {loss:.4f} โ Perplexity: {ppl:.2f}")
``` |
edumunozsala/gemma3-1b-it-financial-sent-analysis | edumunozsala | "2025-04-19T18:39:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T18:37:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
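Until the author fills this in, a hypothetical starting point is sketched below. The prompt format is an assumption inferred from the model name (financial sentiment analysis), not a documented interface:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="edumunozsala/gemma3-1b-it-financial-sent-analysis",
)

# Assumed instruction style; the actual fine-tuning template is not documented.
prompt = ("Classify the sentiment of this financial headline as positive, "
          "negative, or neutral: 'Company X beats quarterly earnings estimates.'")
print(generator(prompt, max_new_tokens=16)[0]["generated_text"])
```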
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RJTPP/stage1-VL-3b-v6-step-test0 | RJTPP | "2025-04-19T18:37:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T18:37:44Z" | ---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RJTPP
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
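A minimal loading sketch with plain transformers is shown below. Whether this repo ships merged weights or only an adapter is not stated, so treat the model class and repo id as assumptions:
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "RJTPP/stage1-VL-3b-v6-step-test0"  # assumed to hold full merged weights
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
```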
|
luckeciano/Qwen-2.5-7B-RL-AC-BigLRv3-Fast-4-v5-Train-Marg | luckeciano | "2025-04-19T18:37:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T16:17:36Z" | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-RL-AC-BigLRv3-Fast-4-v5-Train-Marg
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-RL-AC-BigLRv3-Fast-4-v5-Train-Marg
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-RL-AC-BigLRv3-Fast-4-v5-Train-Marg", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/MaxEntLLMs/runs/hd0zlwq4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sbapan41/OCR_Data_Extraction | sbapan41 | "2025-04-19T18:33:49Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"code",
"data",
"ocr",
"data_extraction",
"feature-extraction",
"en",
"base_model:naver-clova-ix/donut-base",
"base_model:adapter:naver-clova-ix/donut-base",
"license:apache-2.0",
"region:us"
] | feature-extraction | "2025-04-19T18:01:37Z" | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- naver-clova-ix/donut-base
pipeline_tag: feature-extraction
library_name: adapter-transformers
tags:
- code
- data
- ocr
- data_extraction
--- |
rbelanec/train_cola_1744902670 | rbelanec | "2025-04-19T18:31:42Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | "2025-04-19T11:01:49Z" | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_cola_1744902670
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_1744902670
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1314
- Num Input Tokens Seen: 31253176
## Model description
More information needed
## Intended uses & limitations
More information needed
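A minimal PEFT loading sketch is shown below, assuming the adapter weights are hosted in this repo; the API usage is standard PEFT, not a documented workflow from the author:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-3-1b-it"               # base model named in this card
adapter_id = "rbelanec/train_cola_1744902670"  # assumed adapter location

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```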
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 1.0199 | 0.4158 | 200 | 0.9345 | 156832 |
| 0.4834 | 0.8316 | 400 | 0.4653 | 313248 |
| 0.2765 | 1.2474 | 600 | 0.2476 | 469520 |
| 0.262 | 1.6632 | 800 | 0.1840 | 625360 |
| 0.2552 | 2.0790 | 1000 | 0.1702 | 782304 |
| 0.1828 | 2.4948 | 1200 | 0.1651 | 938560 |
| 0.1741 | 2.9106 | 1400 | 0.1595 | 1094144 |
| 0.1644 | 3.3264 | 1600 | 0.1559 | 1250544 |
| 0.1567 | 3.7422 | 1800 | 0.1537 | 1407440 |
| 0.1828 | 4.1580 | 2000 | 0.1515 | 1563512 |
| 0.1603 | 4.5738 | 2200 | 0.1537 | 1719064 |
| 0.1644 | 4.9896 | 2400 | 0.1553 | 1875384 |
| 0.1483 | 5.4054 | 2600 | 0.1473 | 2031440 |
| 0.1727 | 5.8212 | 2800 | 0.1462 | 2187952 |
| 0.1743 | 6.2370 | 3000 | 0.1477 | 2344864 |
| 0.1666 | 6.6528 | 3200 | 0.1476 | 2500448 |
| 0.1717 | 7.0686 | 3400 | 0.1465 | 2656400 |
| 0.1783 | 7.4844 | 3600 | 0.1454 | 2812912 |
| 0.1462 | 7.9002 | 3800 | 0.1434 | 2968816 |
| 0.1426 | 8.3160 | 4000 | 0.1443 | 3124448 |
| 0.1503 | 8.7318 | 4200 | 0.1443 | 3280320 |
| 0.138 | 9.1476 | 4400 | 0.1478 | 3437072 |
| 0.1528 | 9.5634 | 4600 | 0.1442 | 3593520 |
| 0.1484 | 9.9792 | 4800 | 0.1427 | 3750544 |
| 0.1359 | 10.3950 | 5000 | 0.1410 | 3905920 |
| 0.1586 | 10.8108 | 5200 | 0.1439 | 4063008 |
| 0.1458 | 11.2266 | 5400 | 0.1405 | 4219472 |
| 0.1543 | 11.6424 | 5600 | 0.1467 | 4376048 |
| 0.1619 | 12.0582 | 5800 | 0.1440 | 4531752 |
| 0.1541 | 12.4740 | 6000 | 0.1414 | 4687112 |
| 0.1272 | 12.8898 | 6200 | 0.1411 | 4843464 |
| 0.1504 | 13.3056 | 6400 | 0.1392 | 4999648 |
| 0.1758 | 13.7214 | 6600 | 0.1411 | 5157152 |
| 0.1237 | 14.1372 | 6800 | 0.1403 | 5312328 |
| 0.1594 | 14.5530 | 7000 | 0.1401 | 5468680 |
| 0.1644 | 14.9688 | 7200 | 0.1385 | 5624776 |
| 0.1156 | 15.3846 | 7400 | 0.1374 | 5782032 |
| 0.1205 | 15.8004 | 7600 | 0.1384 | 5938000 |
| 0.1583 | 16.2162 | 7800 | 0.1493 | 6094536 |
| 0.1725 | 16.6320 | 8000 | 0.1436 | 6250760 |
| 0.1353 | 17.0478 | 8200 | 0.1427 | 6406616 |
| 0.1372 | 17.4636 | 8400 | 0.1361 | 6563416 |
| 0.1305 | 17.8794 | 8600 | 0.1383 | 6719288 |
| 0.1529 | 18.2952 | 8800 | 0.1357 | 6875592 |
| 0.1435 | 18.7110 | 9000 | 0.1410 | 7032392 |
| 0.1446 | 19.1268 | 9200 | 0.1349 | 7188120 |
| 0.1407 | 19.5426 | 9400 | 0.1371 | 7344760 |
| 0.1478 | 19.9584 | 9600 | 0.1380 | 7501144 |
| 0.1349 | 20.3742 | 9800 | 0.1388 | 7657160 |
| 0.1338 | 20.7900 | 10000 | 0.1353 | 7813128 |
| 0.1846 | 21.2058 | 10200 | 0.1427 | 7969880 |
| 0.1395 | 21.6216 | 10400 | 0.1417 | 8126392 |
| 0.1701 | 22.0374 | 10600 | 0.1367 | 8282480 |
| 0.1647 | 22.4532 | 10800 | 0.1368 | 8438992 |
| 0.1144 | 22.8690 | 11000 | 0.1404 | 8595376 |
| 0.14 | 23.2848 | 11200 | 0.1351 | 8751352 |
| 0.1326 | 23.7006 | 11400 | 0.1350 | 8907960 |
| 0.1497 | 24.1164 | 11600 | 0.1408 | 9064424 |
| 0.1585 | 24.5322 | 11800 | 0.1397 | 9220456 |
| 0.1264 | 24.9480 | 12000 | 0.1397 | 9376488 |
| 0.1415 | 25.3638 | 12200 | 0.1348 | 9533208 |
| 0.1398 | 25.7796 | 12400 | 0.1353 | 9689464 |
| 0.1284 | 26.1954 | 12600 | 0.1417 | 9845048 |
| 0.1232 | 26.6112 | 12800 | 0.1340 | 10001784 |
| 0.1149 | 27.0270 | 13000 | 0.1344 | 10157800 |
| 0.1254 | 27.4428 | 13200 | 0.1350 | 10313128 |
| 0.1372 | 27.8586 | 13400 | 0.1364 | 10469384 |
| 0.1282 | 28.2744 | 13600 | 0.1339 | 10625944 |
| 0.0999 | 28.6902 | 13800 | 0.1389 | 10782456 |
| 0.1528 | 29.1060 | 14000 | 0.1359 | 10938304 |
| 0.1064 | 29.5218 | 14200 | 0.1346 | 11094528 |
| 0.1041 | 29.9376 | 14400 | 0.1406 | 11250976 |
| 0.1697 | 30.3534 | 14600 | 0.1359 | 11406672 |
| 0.1442 | 30.7692 | 14800 | 0.1402 | 11562768 |
| 0.1462 | 31.1850 | 15000 | 0.1345 | 11719016 |
| 0.0968 | 31.6008 | 15200 | 0.1338 | 11875368 |
| 0.1253 | 32.0166 | 15400 | 0.1368 | 12031048 |
| 0.102 | 32.4324 | 15600 | 0.1354 | 12187432 |
| 0.1342 | 32.8482 | 15800 | 0.1343 | 12343432 |
| 0.1112 | 33.2640 | 16000 | 0.1366 | 12500472 |
| 0.1647 | 33.6798 | 16200 | 0.1346 | 12656248 |
| 0.1175 | 34.0956 | 16400 | 0.1340 | 12811752 |
| 0.1261 | 34.5114 | 16600 | 0.1314 | 12968104 |
| 0.1259 | 34.9272 | 16800 | 0.1344 | 13124392 |
| 0.1171 | 35.3430 | 17000 | 0.1356 | 13281144 |
| 0.1593 | 35.7588 | 17200 | 0.1362 | 13437720 |
| 0.1429 | 36.1746 | 17400 | 0.1326 | 13594448 |
| 0.1451 | 36.5904 | 17600 | 0.1338 | 13750544 |
| 0.1583 | 37.0062 | 17800 | 0.1328 | 13906304 |
| 0.1447 | 37.4220 | 18000 | 0.1364 | 14062784 |
| 0.1262 | 37.8378 | 18200 | 0.1325 | 14219168 |
| 0.1201 | 38.2536 | 18400 | 0.1346 | 14375024 |
| 0.1666 | 38.6694 | 18600 | 0.1325 | 14530800 |
| 0.1433 | 39.0852 | 18800 | 0.1362 | 14687808 |
| 0.1106 | 39.5010 | 19000 | 0.1360 | 14843360 |
| 0.1105 | 39.9168 | 19200 | 0.1373 | 14999808 |
| 0.114 | 40.3326 | 19400 | 0.1323 | 15155496 |
| 0.1028 | 40.7484 | 19600 | 0.1353 | 15311688 |
| 0.1374 | 41.1642 | 19800 | 0.1333 | 15468264 |
| 0.1481 | 41.5800 | 20000 | 0.1355 | 15624072 |
| 0.1353 | 41.9958 | 20200 | 0.1332 | 15780456 |
| 0.1048 | 42.4116 | 20400 | 0.1330 | 15936432 |
| 0.1436 | 42.8274 | 20600 | 0.1346 | 16092272 |
| 0.1155 | 43.2432 | 20800 | 0.1355 | 16249048 |
| 0.1501 | 43.6590 | 21000 | 0.1370 | 16405368 |
| 0.1334 | 44.0748 | 21200 | 0.1328 | 16561000 |
| 0.1337 | 44.4906 | 21400 | 0.1345 | 16718312 |
| 0.1296 | 44.9064 | 21600 | 0.1358 | 16874632 |
| 0.1215 | 45.3222 | 21800 | 0.1333 | 17031680 |
| 0.1295 | 45.7380 | 22000 | 0.1345 | 17188288 |
| 0.1301 | 46.1538 | 22200 | 0.1336 | 17345048 |
| 0.1175 | 46.5696 | 22400 | 0.1344 | 17501560 |
| 0.1332 | 46.9854 | 22600 | 0.1323 | 17657336 |
| 0.0998 | 47.4012 | 22800 | 0.1350 | 17813576 |
| 0.1206 | 47.8170 | 23000 | 0.1342 | 17970024 |
| 0.0966 | 48.2328 | 23200 | 0.1317 | 18126280 |
| 0.1542 | 48.6486 | 23400 | 0.1341 | 18282568 |
| 0.118 | 49.0644 | 23600 | 0.1394 | 18438872 |
| 0.1429 | 49.4802 | 23800 | 0.1349 | 18595416 |
| 0.1464 | 49.8960 | 24000 | 0.1339 | 18751672 |
| 0.1389 | 50.3119 | 24200 | 0.1325 | 18906848 |
| 0.138 | 50.7277 | 24400 | 0.1341 | 19064192 |
| 0.1481 | 51.1435 | 24600 | 0.1418 | 19219856 |
| 0.13 | 51.5593 | 24800 | 0.1336 | 19376464 |
| 0.1503 | 51.9751 | 25000 | 0.1340 | 19532272 |
| 0.1321 | 52.3909 | 25200 | 0.1334 | 19688288 |
| 0.1277 | 52.8067 | 25400 | 0.1384 | 19844672 |
| 0.1118 | 53.2225 | 25600 | 0.1337 | 20001552 |
| 0.105 | 53.6383 | 25800 | 0.1323 | 20157424 |
| 0.1384 | 54.0541 | 26000 | 0.1336 | 20313440 |
| 0.1142 | 54.4699 | 26200 | 0.1369 | 20469664 |
| 0.1325 | 54.8857 | 26400 | 0.1321 | 20625984 |
| 0.1415 | 55.3015 | 26600 | 0.1352 | 20781904 |
| 0.1186 | 55.7173 | 26800 | 0.1367 | 20938512 |
| 0.1281 | 56.1331 | 27000 | 0.1335 | 21095008 |
| 0.1648 | 56.5489 | 27200 | 0.1367 | 21251264 |
| 0.141 | 56.9647 | 27400 | 0.1339 | 21407744 |
| 0.1336 | 57.3805 | 27600 | 0.1331 | 21564560 |
| 0.127 | 57.7963 | 27800 | 0.1326 | 21720560 |
| 0.1098 | 58.2121 | 28000 | 0.1356 | 21877024 |
| 0.1057 | 58.6279 | 28200 | 0.1335 | 22033344 |
| 0.1215 | 59.0437 | 28400 | 0.1388 | 22189872 |
| 0.1412 | 59.4595 | 28600 | 0.1318 | 22345712 |
| 0.1332 | 59.8753 | 28800 | 0.1341 | 22502352 |
| 0.132 | 60.2911 | 29000 | 0.1353 | 22658440 |
| 0.1477 | 60.7069 | 29200 | 0.1339 | 22814056 |
| 0.1082 | 61.1227 | 29400 | 0.1343 | 22970680 |
| 0.1747 | 61.5385 | 29600 | 0.1353 | 23126776 |
| 0.1357 | 61.9543 | 29800 | 0.1327 | 23283064 |
| 0.1002 | 62.3701 | 30000 | 0.1340 | 23440000 |
| 0.1126 | 62.7859 | 30200 | 0.1356 | 23596224 |
| 0.1258 | 63.2017 | 30400 | 0.1352 | 23751880 |
| 0.1333 | 63.6175 | 30600 | 0.1337 | 23907624 |
| 0.089 | 64.0333 | 30800 | 0.1337 | 24063864 |
| 0.1212 | 64.4491 | 31000 | 0.1329 | 24219608 |
| 0.1456 | 64.8649 | 31200 | 0.1331 | 24376856 |
| 0.1371 | 65.2807 | 31400 | 0.1335 | 24533352 |
| 0.1342 | 65.6965 | 31600 | 0.1355 | 24688616 |
| 0.1394 | 66.1123 | 31800 | 0.1324 | 24844832 |
| 0.1321 | 66.5281 | 32000 | 0.1372 | 25002240 |
| 0.1284 | 66.9439 | 32200 | 0.1333 | 25158144 |
| 0.1364 | 67.3597 | 32400 | 0.1336 | 25314384 |
| 0.1013 | 67.7755 | 32600 | 0.1330 | 25470704 |
| 0.1333 | 68.1913 | 32800 | 0.1330 | 25627200 |
| 0.1057 | 68.6071 | 33000 | 0.1366 | 25783456 |
| 0.1267 | 69.0229 | 33200 | 0.1339 | 25940304 |
| 0.1145 | 69.4387 | 33400 | 0.1341 | 26096432 |
| 0.1038 | 69.8545 | 33600 | 0.1334 | 26253360 |
| 0.1024 | 70.2703 | 33800 | 0.1343 | 26408736 |
| 0.1166 | 70.6861 | 34000 | 0.1333 | 26565056 |
| 0.1616 | 71.1019 | 34200 | 0.1350 | 26721176 |
| 0.1192 | 71.5177 | 34400 | 0.1353 | 26877368 |
| 0.1183 | 71.9335 | 34600 | 0.1358 | 27033912 |
| 0.1527 | 72.3493 | 34800 | 0.1323 | 27190376 |
| 0.146 | 72.7651 | 35000 | 0.1349 | 27347112 |
| 0.1274 | 73.1809 | 35200 | 0.1352 | 27503480 |
| 0.1277 | 73.5967 | 35400 | 0.1334 | 27660280 |
| 0.1407 | 74.0125 | 35600 | 0.1333 | 27815536 |
| 0.1269 | 74.4283 | 35800 | 0.1353 | 27971600 |
| 0.1255 | 74.8441 | 36000 | 0.1342 | 28127664 |
| 0.1432 | 75.2599 | 36200 | 0.1354 | 28284736 |
| 0.1083 | 75.6757 | 36400 | 0.1359 | 28440672 |
| 0.1248 | 76.0915 | 36600 | 0.1347 | 28596968 |
| 0.0944 | 76.5073 | 36800 | 0.1322 | 28753672 |
| 0.1213 | 76.9231 | 37000 | 0.1325 | 28909800 |
| 0.1175 | 77.3389 | 37200 | 0.1343 | 29066104 |
| 0.1217 | 77.7547 | 37400 | 0.1343 | 29222328 |
| 0.115 | 78.1705 | 37600 | 0.1353 | 29378344 |
| 0.1197 | 78.5863 | 37800 | 0.1370 | 29534888 |
| 0.1422 | 79.0021 | 38000 | 0.1331 | 29690392 |
| 0.1215 | 79.4179 | 38200 | 0.1363 | 29846936 |
| 0.1302 | 79.8337 | 38400 | 0.1352 | 30002424 |
| 0.1303 | 80.2495 | 38600 | 0.1365 | 30158536 |
| 0.121 | 80.6653 | 38800 | 0.1348 | 30314984 |
| 0.1364 | 81.0811 | 39000 | 0.1343 | 30471288 |
| 0.1273 | 81.4969 | 39200 | 0.1329 | 30628024 |
| 0.122 | 81.9127 | 39400 | 0.1361 | 30784376 |
| 0.1142 | 82.3285 | 39600 | 0.1341 | 30940904 |
| 0.1026 | 82.7443 | 39800 | 0.1340 | 31097352 |
| 0.124 | 83.1601 | 40000 | 0.1339 | 31253176 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
raulgdp/Mistral-7B-Instruct-v0.3-JEP | raulgdp | "2025-04-19T18:29:52Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"es",
"dataset:jdavit/colombian-conflict-SQA",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T17:34:57Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: Mistral-7B-Instruct-v0.3-JEP
results: []
datasets:
- jdavit/colombian-conflict-SQA
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.3-JEP
This model was fine-tuned from [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
on the [jdavit/colombian-conflict-SQA](jdavit/colombian-conflict-SQA) corpus, which contains public information from the JEP (Colombia's Special Jurisdiction for Peace),
reaching a final evaluation loss of 0.9339.
## Model description
This model was trained on top of the original [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
to build a chatbot that answers questions about cases brought before the JEP in Colombia. This
is an academic exercise carried out by students at Universidad del Valle (Univalle).
## Intended uses & limitations
More information needed
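A minimal sketch of how this adapter could be queried in Spanish is shown below; the chat-template usage and generation settings are assumptions for illustration, not part of the documented training setup:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "raulgdp/Mistral-7B-Instruct-v0.3-JEP"  # assumed adapter location

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "¿Cuál es el mandato de la JEP?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```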
## Training and evaluation data
The [jdavit/colombian-conflict-SQA](jdavit/colombian-conflict-SQA) dataset consists of 2,896
question-answer examples with context.
## Training procedure
The model was trained for 4 hours with:
trainable params: 6,815,744 || all params: 7,254,839,296 || trainable%: 0.0939
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1764 | 0.1535 | 100 | 1.1504 |
| 1.0487 | 0.3070 | 200 | 1.0548 |
| 0.9853 | 0.4605 | 300 | 1.0175 |
| 0.9844 | 0.6140 | 400 | 0.9919 |
| 1.011 | 0.7675 | 500 | 0.9780 |
| 0.9396 | 0.9210 | 600 | 0.9663 |
| 0.9259 | 1.0737 | 700 | 0.9569 |
| 0.9444 | 1.2272 | 800 | 0.9483 |
| 0.8928 | 1.3807 | 900 | 0.9415 |
| 0.9195 | 1.5342 | 1000 | 0.9364 |
| 0.8967 | 1.6876 | 1100 | 0.9338 |
| 0.927 | 1.8411 | 1200 | 0.9300 |
| 0.9417 | 1.9946 | 1300 | 0.9263 |
| 0.9198 | 2.1474 | 1400 | 0.9276 |
| 0.9108 | 2.3008 | 1500 | 0.9237 |
| 0.8971 | 2.4543 | 1600 | 0.9223 |
| 0.8758 | 2.6078 | 1700 | 0.9199 |
| 0.8681 | 2.7613 | 1800 | 0.9169 |
| 0.8557 | 2.9148 | 1900 | 0.9153 |
| 0.82 | 3.0675 | 2000 | 0.9161 |
| 0.8379 | 3.2210 | 2100 | 0.9170 |
| 0.8414 | 3.3745 | 2200 | 0.9161 |
| 0.9164 | 3.5280 | 2300 | 0.9141 |
| 0.8764 | 3.6815 | 2400 | 0.9101 |
| 0.8449 | 3.8350 | 2500 | 0.9094 |
| 0.8708 | 3.9885 | 2600 | 0.9088 |
| 0.83 | 4.1412 | 2700 | 0.9132 |
| 0.7793 | 4.2947 | 2800 | 0.9148 |
| 0.8527 | 4.4482 | 2900 | 0.9120 |
| 0.7941 | 4.6017 | 3000 | 0.9102 |
| 0.8103 | 4.7552 | 3100 | 0.9111 |
| 0.7991 | 4.9087 | 3200 | 0.9083 |
| 0.7791 | 5.0614 | 3300 | 0.9126 |
| 0.8297 | 5.2149 | 3400 | 0.9154 |
| 0.739 | 5.3684 | 3500 | 0.9181 |
| 0.8456 | 5.5219 | 3600 | 0.9105 |
| 0.826 | 5.6754 | 3700 | 0.9135 |
| 0.8336 | 5.8289 | 3800 | 0.9127 |
| 0.7995 | 5.9823 | 3900 | 0.9134 |
| 0.7782 | 6.1351 | 4000 | 0.9207 |
| 0.7822 | 6.2886 | 4100 | 0.9170 |
| 0.7556 | 6.4421 | 4200 | 0.9182 |
| 0.7522 | 6.5955 | 4300 | 0.9213 |
| 0.7669 | 6.7490 | 4400 | 0.9168 |
| 0.7503 | 6.9025 | 4500 | 0.9173 |
| 0.7739 | 7.0553 | 4600 | 0.9217 |
| 0.7699 | 7.2087 | 4700 | 0.9293 |
| 0.761 | 7.3622 | 4800 | 0.9234 |
| 0.7257 | 7.5157 | 4900 | 0.9269 |
| 0.7394 | 7.6692 | 5000 | 0.9233 |
| 0.7354 | 7.8227 | 5100 | 0.9218 |
| 0.8162 | 7.9762 | 5200 | 0.9209 |
| 0.7276 | 8.1289 | 5300 | 0.9294 |
| 0.7477 | 8.2824 | 5400 | 0.9299 |
| 0.7278 | 8.4359 | 5500 | 0.9282 |
| 0.6571 | 8.5894 | 5600 | 0.9297 |
| 0.7494 | 8.7429 | 5700 | 0.9286 |
| 0.767 | 8.8964 | 5800 | 0.9267 |
| 0.6792 | 9.0491 | 5900 | 0.9338 |
| 0.7053 | 9.2026 | 6000 | 0.9350 |
| 0.706 | 9.3561 | 6100 | 0.9351 |
| 0.7232 | 9.5096 | 6200 | 0.9334 |
| 0.7301 | 9.6631 | 6300 | 0.9332 |
| 0.7424 | 9.8166 | 6400 | 0.9344 |
| 0.6775 | 9.9701 | 6500 | 0.9339 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mradermacher/Llama_3.3_70b_DarkHorse-GGUF | mradermacher | "2025-04-19T18:29:05Z" | 259 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.3_70b_DarkHorse",
"base_model:quantized:Nexesenex/Llama_3.3_70b_DarkHorse",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-02T05:48:50Z" | ---
base_model: Nexesenex/Llama_3.3_70b_DarkHorse
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nexesenex/Llama_3.3_70b_DarkHorse
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
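For this repo's split quants, joining the parts is a plain byte-level concatenation. A sketch, using the Q6_K filenames from the table below:

```bash
# Join the two Q6_K parts into a single GGUF file (part order matters).
cat Llama_3.3_70b_DarkHorse.Q6_K.gguf.part1of2 \
    Llama_3.3_70b_DarkHorse.Q6_K.gguf.part2of2 \
    > Llama_3.3_70b_DarkHorse.Q6_K.gguf
```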
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.3_70b_DarkHorse-GGUF/resolve/main/Llama_3.3_70b_DarkHorse.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dzanbek/c6b95c38-8cc1-4ea4-935c-5ea479bcc204 | dzanbek | "2025-04-19T18:26:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T18:07:12Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c6b95c38-8cc1-4ea4-935c-5ea479bcc204
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1c6414ccd76c2ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1c6414ccd76c2ee_train_data.json
type:
field_input: post_text
field_instruction: post_title
field_output: comment_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/c6b95c38-8cc1-4ea4-935c-5ea479bcc204
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b1c6414ccd76c2ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5f4c36dc-c48a-4401-aefb-c95ccd4f0d5a
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 5f4c36dc-c48a-4401-aefb-c95ccd4f0d5a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c6b95c38-8cc1-4ea4-935c-5ea479bcc204
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0147 | 150 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Navi004/deepseek-r1-finetuned_lora-adapter-Batch9_v3_DIAC_WoZ | Navi004 | "2025-04-19T18:19:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T18:19:18Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Navi004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bachzz/PPO-LunarLander-v2-PPO_accel_1000_iters | bachzz | "2025-04-19T18:15:07Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-04-19T18:14:59Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -558.62 +/- 472.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on common SB3 Hub naming, not confirmed by this repo):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Fetch the checkpoint from the Hub, then restore the PPO agent.
checkpoint = load_from_hub("bachzz/PPO-LunarLander-v2-PPO_accel_1000_iters", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dzanbek/e37395af-f8b0-42ed-9949-2810fe4904bc | dzanbek | "2025-04-19T18:04:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T17:51:23Z" | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e37395af-f8b0-42ed-9949-2810fe4904bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bab29fa9c9fe164a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bab29fa9c9fe164a_train_data.json
type:
field_input: document_title
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/e37395af-f8b0-42ed-9949-2810fe4904bc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/bab29fa9c9fe164a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 46afd46c-955f-4e61-bbd4-8438f9047ee2
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 46afd46c-955f-4e61-bbd4-8438f9047ee2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e37395af-f8b0-42ed-9949-2810fe4904bc
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.3316 | 0.0447 | 150 | 2.4446 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MinaMila/llama_instbase_LoRa_Adult_cfda_ep1_22 | MinaMila | "2025-04-19T18:03:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T18:03:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheCluster/VL-Rethinker-72B-mlx-4bit | TheCluster | "2025-04-19T18:03:41Z" | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2_5_vl",
"chat",
"apple",
"4bit",
"multimodal",
"visual-question-answering",
"en",
"arxiv:2504.08837",
"base_model:TIGER-Lab/VL-Rethinker-72B",
"base_model:quantized:TIGER-Lab/VL-Rethinker-72B",
"license:apache-2.0",
"region:us"
] | visual-question-answering | "2025-04-18T20:10:10Z" | ---
license: apache-2.0
base_model:
- TIGER-Lab/VL-Rethinker-72B
base_model_relation: quantized
pipeline_tag: visual-question-answering
tags:
- chat
- mlx
- apple
- 4bit
- multimodal
language:
- en
library_name: mlx
---
# VL-Rethinker-72B 4-bit MLX
This model was converted to MLX format from [`TIGER-Lab/VL-Rethinker-72B`](https://huggingface.co/TIGER-Lab/VL-Rethinker-72B) using mlx-vlm version **0.1.23**.
Refer to the [original model card](https://huggingface.co/TIGER-Lab/VL-Rethinker-72B) and [**Paper**](https://arxiv.org/abs/2504.08837) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model TheCluster/VL-Rethinker-72B-mlx-4bit --max-tokens 512 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
rupa1210/phi-2-role-play | rupa1210 | "2025-04-19T18:03:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T18:03:19Z" | ---
base_model: microsoft/phi-2
library_name: transformers
model_name: phi-2-role-play
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for phi-2-role-play
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rupa1210/phi-2-role-play", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu118
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kiwikiw/zudo | kiwikiw | "2025-04-19T18:03:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T17:59:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nomadrp/dpo-v1 | nomadrp | "2025-04-19T18:02:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T17:36:59Z" | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: dpo-v1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/dpo-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
filbertwijaya/Hokkien-Indonesian-Llama-2-Translator-7B-QLoRA-Adapters | filbertwijaya | "2025-04-19T18:02:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Bohanlu/Taigi-Llama-2-Translator-7B",
"base_model:finetune:Bohanlu/Taigi-Llama-2-Translator-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T18:02:17Z" | ---
base_model: Bohanlu/Taigi-Llama-2-Translator-7B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** filbertwijaya
- **License:** apache-2.0
- **Finetuned from model :** Bohanlu/Taigi-Llama-2-Translator-7B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF | mradermacher | "2025-04-19T18:00:10Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:azservice/TestLogica-Mathstral-7B-v0.1",
"base_model:quantized:azservice/TestLogica-Mathstral-7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T17:04:33Z" | ---
base_model: azservice/TestLogica-Mathstral-7B-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/azservice/TestLogica-Mathstral-7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
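As a starting point, here is a hypothetical llama.cpp invocation with the recommended Q4_K_M quant (the binary name and flags follow recent llama.cpp builds and are not specified by this card):

```bash
# Run the Q4_K_M quant locally with llama.cpp's CLI.
llama-cli -m TestLogica-Mathstral-7B-v0.1.Q4_K_M.gguf -p "Solve: 12 * 17 =" -n 64
```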
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TestLogica-Mathstral-7B-v0.1-GGUF/resolve/main/TestLogica-Mathstral-7B-v0.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Paywinful/wav2vec2-large-xls-r-300m-akan-v4 | Paywinful | "2025-04-19T17:59:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-19T17:56:45Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-akan-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-akan-v4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
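A minimal transcription sketch (an assumption about intended usage, since the card documents no preprocessing; wav2vec2 XLS-R checkpoints expect 16 kHz mono audio, and the file path is a placeholder):

```python
from transformers import pipeline

# Decoding from a file path requires ffmpeg to be installed.
asr = pipeline("automatic-speech-recognition", model="Paywinful/wav2vec2-large-xls-r-300m-akan-v4")
print(asr("sample.wav")["text"])
```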
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Darkhn/Rogue-Destiny-V2-Llama-3.3-70B | Darkhn | "2025-04-19T17:54:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2",
"base_model:merge:Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2",
"base_model:ReadyArt/Forgotten-Abomination-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Abomination-70B-v5.0",
"base_model:SentientAGI/Dobby-Unhinged-Llama-3.3-70B",
"base_model:merge:SentientAGI/Dobby-Unhinged-Llama-3.3-70B",
"base_model:Steelskull/L3.3-MS-Nevoria-70b",
"base_model:merge:Steelskull/L3.3-MS-Nevoria-70b",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T17:16:59Z" | ---
base_model:
- SentientAGI/Dobby-Unhinged-Llama-3.3-70B
- ReadyArt/Forgotten-Abomination-70B-v5.0
- Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- Steelskull/L3.3-MS-Nevoria-70b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Steelskull/L3.3-MS-Nevoria-70b](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70b) as a base.
### Models Merged
The following models were included in the merge:
* [SentientAGI/Dobby-Unhinged-Llama-3.3-70B](https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B)
* [ReadyArt/Forgotten-Abomination-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Abomination-70B-v5.0)
* [Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2](https://huggingface.co/Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2)
* [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ReadyArt/Forgotten-Abomination-70B-v5.0
- model: Steelskull/L3.3-MS-Nevoria-70b
- model: Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2
- model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
merge_method: model_stock
base_model: Steelskull/L3.3-MS-Nevoria-70b
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: base
```
|
mlsnr/bluebag | mlsnr | "2025-04-19T17:46:36Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-04-19T17:46:27Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/balenciaga-aw-2025-bag.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# bluebag
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/mlsnr/bluebag/tree/main) them in the Files & versions tab.
|
Betha/fen_understanding_v1_r8 | Betha | "2025-04-19T17:46:36Z" | 85 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-04-15T18:56:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Themira/dual_encoder_xcsqa | Themira | "2025-04-19T17:45:27Z" | 0 | 0 | null | [
"pytorch",
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T07:46:15Z" | ---
license: apache-2.0
---
|
JurisAnalyzer/A_legal | JurisAnalyzer | "2025-04-19T17:41:24Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T17:41:17Z" | ---
license: apache-2.0
---
|
tanya17/mt5-swahili-finetuned | tanya17 | "2025-04-19T17:37:10Z" | 7 | 1 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-04-18T12:25:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [TANYA TOMAR]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf | RichardErkhov | "2025-04-19T17:35:34Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T16:10:08Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mpg27_mistral7bv3_sft_ogd_rms_epoch1 - GGUF
- Model creator: https://huggingface.co/yjwon/
- Original model: https://huggingface.co/yjwon/mpg27_mistral7bv3_sft_ogd_rms_epoch1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q2_K.gguf) | Q2_K | 2.54GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K.gguf) | Q3_K | 3.28GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K.gguf) | Q4_K | 4.07GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K.gguf) | Q5_K | 4.78GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q6_K.gguf) | Q6_K | 5.54GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q8_0.gguf) | Q8_0 | 7.17GB |
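A minimal sketch of pulling one of the files above and running it locally. It assumes the optional dependencies `huggingface_hub` and `llama-cpp-python` are installed; the filename is the Q4_K_M row from the table, and the context length is a free choice.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quantized file from this repo (~4GB for Q4_K_M).
model_path = hf_hub_download(
    repo_id="RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf",
    filename="mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```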
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devngho/llama3-jamo-tokenizer | devngho | "2025-04-19T17:35:13Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-06T03:49:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robertou2/task-7-microsoft-Phi-3-medium-128k-instruct | robertou2 | "2025-04-19T17:34:14Z" | 362 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"base_model:adapter:microsoft/Phi-3-medium-128k-instruct",
"region:us"
] | null | "2025-04-17T18:31:26Z" | ---
base_model: microsoft/Phi-3-medium-128k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Ndr2444/nami_seaxy | Ndr2444 | "2025-04-19T17:32:51Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T17:32:51Z" | ---
license: apache-2.0
---
|
AirMannanov/llm-course-hw3-dora | AirMannanov | "2025-04-19T17:31:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-12T17:54:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sxsun1684/dpo-llama3-lora-pairrm | sxsun1684 | "2025-04-19T17:24:49Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-17T19:41:54Z" | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# DPO Fine-Tuning Report: LLaMA 3.2 + PairRM Preference Dataset
## Model: `sxsun1684/dpo-llama3-lora-pairrm`
### 1. Overview
This document summarizes the process and configuration used to fine-tune the LLaMA 3.2 1B model on the **PairRM** preference dataset using Direct Preference Optimization (DPO) with PEFT (LoRA).
---
### 2. Objective
To improve the model's alignment with human preferences by fine-tuning it on pairwise preference data (chosen vs rejected responses) using DPO, leveraging PairRM-generated labels.
---
### 3. Dataset
- **Name**: `sxsun1684/pairrm-lima50-preferences`
- **Size**: 75 instruction pairs
- **Format**: Each example contains:
- `prompt`: the instruction/query
- `chosen`: the preferred response
- `rejected`: the less preferred response
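A quick way to inspect this pairwise format (the `train` split name is an assumption; the card does not state it):
```python
from datasets import load_dataset

prefs = load_dataset("sxsun1684/pairrm-lima50-preferences", split="train")
example = prefs[0]
print(example["prompt"])    # the instruction/query
print(example["chosen"])    # the preferred response
print(example["rejected"])  # the less preferred response
```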
---
### 4. Model Setup
- **Base Model**: `meta-llama/Llama-3.2-1B`
- **PEFT Method**: LoRA (Low-Rank Adaptation)
#### LoRA Configuration
```python
from peft import LoraConfig  # import added for completeness

peft_config = LoraConfig(
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor applied to the update
    bias="none",       # leave bias terms frozen
    task_type="CAUSAL_LM",
)
```
---
### 5. DPO Training Configuration
```python
from trl import DPOConfig  # import added for completeness

training_args = DPOConfig(
    beta=0.1,                        # strength of the preference term vs. the reference model
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size = 8
    num_train_epochs=3,
    max_length=512,
    save_strategy="epoch",
    logging_steps=10,
    push_to_hub=False,
    report_to="none",
    padding_value=tokenizer.pad_token_id,  # assumes the tokenizer is already loaded
)
```
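For context (background, not part of the original report), `beta` is the coefficient in the standard DPO objective, which scores the chosen response $y_w$ against the rejected one $y_l$ relative to a frozen reference policy:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Smaller values such as the `beta=0.1` used here permit larger departures from the reference model.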
---
### 6. Preprocessing
- Each of `prompt`, `chosen`, and `rejected` was tokenized separately.
- Max lengths:
- Prompt: 128 tokens
- Chosen & Rejected: 384 tokens
- Padding: `max_length` with EOS as pad token
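As a sketch, the tokenization described above might look like the following; the helper name and output fields are illustrative assumptions, since the actual preprocessing script is not included in the card.
```python
def preprocess(example, tokenizer):
    """Tokenize prompt/chosen/rejected separately, padding with EOS."""
    tokenizer.pad_token = tokenizer.eos_token  # EOS doubles as the pad token
    encode = lambda text, n: tokenizer(
        text, max_length=n, truncation=True, padding="max_length"
    )["input_ids"]
    return {
        "prompt_ids": encode(example["prompt"], 128),     # prompt capped at 128
        "chosen_ids": encode(example["chosen"], 384),     # responses capped at 384
        "rejected_ids": encode(example["rejected"], 384),
    }
```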
---
### 7. Training Notes
- Used `DPOTrainer` from `trl==0.16.1`
- No evaluation dataset (only training)
- Training completed in ~3 epochs without OOM errors on batch size 1
---
### 8. Output
- **Model saved to**: `sxsun1684/dpo-llama3-lora-pairrm`
- Contains fine-tuned LoRA adapters and tokenizer config
---
### 9. Suggested Use
You can load and use the model with:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# With peft installed, recent transformers releases can resolve an adapter
# repo like this one by loading the base model and applying the LoRA weights.
model = AutoModelForCausalLM.from_pretrained("sxsun1684/dpo-llama3-lora-pairrm")
tokenizer = AutoTokenizer.from_pretrained("sxsun1684/dpo-llama3-lora-pairrm")
```
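Continuing the snippet above, a short smoke test might look like this (the prompt and greedy decoding settings are illustrative, not from the report):
```python
prompt = "Summarize the benefits of preference tuning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```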
---
### 10. Next Steps
- Compare completions on novel instructions vs base LLaMA and LLM Judge DPO model
- Run qualitative/quantitative analysis of improvements
- Optionally deploy via Gradio or Hugging Face Spaces
---
### Author
SX Sun (sxsun1684)
2025-04
|
xw17/Llama-3.2-3B-Instruct_finetuned_4_optimized_lora_activity_origin | xw17 | "2025-04-19T17:22:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T17:22:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abhinavm16104/TinyLlama-1.1B-qlora-mango | abhinavm16104 | "2025-04-19T17:21:12Z" | 0 | 0 | null | [
"safetensors",
"llama",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:mit",
"region:us"
] | null | "2025-04-18T22:13:00Z" | ---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
metrics:
- perplexity
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# TinyLlama-1.1B-qlora-mango
A fine-tuned version of the [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) model, trained with QLoRA on the [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) prompt-response dataset.
---
## Model Details
- **Base Model**: TinyLlama-1.1B-Chat
- **Tuning Method**: QLoRA (Quantized Low-Rank Adaptation)
- **Use Case**: Instruction-following / Chatbot generation
- **Tokenizer**: TinyLlama tokenizer
- **Framework**: Hugging Face Transformers
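Since QLoRA is named as the tuning method, the sketch below shows what a typical QLoRA setup for this base model looks like; the 4-bit quantization settings and LoRA rank are illustrative defaults, not the values used to train this checkpoint.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base model to 4-bit NF4, compute in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", quantization_config=bnb_config
)

# Attach trainable low-rank adapters on top of the quantized weights.
model = get_peft_model(base, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
model.print_trainable_parameters()
```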
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("abhinavm16104/TinyLlama-1.1B-qlora-mango")
model = AutoModelForCausalLM.from_pretrained("abhinavm16104/TinyLlama-1.1B-qlora-mango")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "<|user|>\nTell me something about mangoes.</s>\n<|assistant|>"
print(pipe(prompt)[0]["generated_text"])
```
## Example Prompt
```text
<|user|>
Tell me something about mangoes.</s>
<|assistant|>
Mangoes are a type of fruit that originated in Southeast Asia and are now grown in many parts of the world...
```
## Citation
If you use TinyLlama-1.1B-qlora-mango in your work, please cite the author:
```
@misc{tinyllama-1.1B-qlora-mango,
  author = {Abhinav Mangalore},
  title  = {TinyLlama-1.1B-qlora-mango},
  year   = {2025},
  url    = {https://huggingface.co/abhinavm16104/TinyLlama-1.1B-qlora-mango}
}
```
 |