Dataset columns and summary statistics:

| Column | Type | Range / stats |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-16 06:27:54 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 522 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-16 06:27:41 |
| card | string | length 11 – 1.01M |
mohammadmahdinouri/albert-init | mohammadmahdinouri | 2025-05-28T23:16:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-28T23:14:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
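The code section is still empty; as a stopgap, here is a minimal sketch assuming the standard 🤗 Transformers fill-mask pipeline applies to this checkpoint (the tags above list `albert` and `fill-mask`; the example sentence is illustrative):
```python
from transformers import pipeline

# Minimal sketch, assuming the standard fill-mask API; ALBERT tokenizers
# use "[MASK]" as the mask token.
fill = pipeline("fill-mask", model="mohammadmahdinouri/albert-init")
for candidate in fill("The capital of France is [MASK]."):
    print(candidate["token_str"], candidate["score"])
```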
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WooseTheMoose/Pixel-Art-Fire-Emblem-GBA-LoRA | WooseTheMoose | 2025-05-28T23:15:45Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/noobai-xl-nai-xl-epsilonpred11version-sdxl",
"base_model:adapter:John6666/noobai-xl-nai-xl-epsilonpred11version-sdxl",
"region:us"
]
| text-to-image | 2025-05-28T23:15:42Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '{'
  output:
    url: images/1836-pixel art, 16-bit, limited colors, retro-EIH-741479186-1.png
- text: '{'
  output:
    url: images/1845-pixel art, 16-bit, limited colors, retro-EIH-981797439.png
- text: '{'
  output:
    url: images/1855-pixel art, 16-bit, limited colors, retro-EIH-1053650129.png
- text: '{'
  output:
    url: images/1853-pixel art, 16-bit, limited colors, retro-EIH-1248038887.png
base_model: John6666/noobai-xl-nai-xl-epsilonpred11version-sdxl
instance_prompt: null
---
# GBAFE
<Gallery />
## Model description
I like to add "pixel art, 16-bit, retro game" to the beginning of the prompt to ensure the model generates pixel art consistently. I may need to retrain it to improve consistency, though.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WooseTheMoose/Pixel-Art-Fire-Emblem-GBA-LoRA/tree/main) them in the Files & versions tab.
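For reference, a minimal loading sketch assuming the standard 🤗 Diffusers SDXL + LoRA APIs and the base model named in the frontmatter (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Minimal sketch, assuming the standard load_lora_weights path for SDXL.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/noobai-xl-nai-xl-epsilonpred11version-sdxl",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("WooseTheMoose/Pixel-Art-Fire-Emblem-GBA-LoRA")

# Per the description above, prepend the style keywords to the prompt.
image = pipe("pixel art, 16-bit, retro game, a knight with a blue cape").images[0]
image.save("gbafe.png")
```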
|
RodrigoR07/smolvmfinetunebasemodelbalanceadomodel | RodrigoR07 | 2025-05-28T23:15:03Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Base",
"base_model:adapter:HuggingFaceTB/SmolVLM-Base",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T23:14:55Z | ---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Base
tags:
- generated_from_trainer
model-index:
- name: smolvmfinetunebasemodelbalanceadomodel
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolvmfinetunebasemodelbalanceadomodel
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Base](https://huggingface.co/HuggingFaceTB/SmolVLM-Base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 9
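For readers reproducing this setup, a rough sketch of how the values above might map onto `transformers.TrainingArguments`; this is a reconstruction, not the authors' script, and the output directory is a placeholder:
```python
from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters.
args = TrainingArguments(
    output_dir="smolvm-finetune",      # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,     # 4 x 4 = total train batch size 16
    optim="paged_adamw_8bit",
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=9,
)
```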
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1946 | 0.9863 | 36 | 0.6281 |
| 0.3657 | 1.9863 | 72 | 0.2049 |
| 0.2117 | 2.9863 | 108 | 0.1394 |
| 0.1613 | 3.9863 | 144 | 0.1134 |
| 0.1341 | 4.9863 | 180 | 0.0964 |
| 0.1095 | 5.9863 | 216 | 0.0885 |
| 0.0914 | 6.9863 | 252 | 0.0827 |
| 0.0772 | 7.9863 | 288 | 0.0807 |
| 0.0667 | 8.9863 | 324 | 0.0810 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1 |
while0628/student_model_epoch240 | while0628 | 2025-05-28T23:09:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T23:06:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
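Pending details from the authors, a minimal sketch assuming the standard 🤗 Transformers chat-style text-generation pipeline (the tags list `llama`, `text-generation`, and `conversational`; chat-formatted pipeline input needs a recent transformers release):
```python
from transformers import pipeline

# Minimal sketch; the prompt and sampling defaults are illustrative.
generator = pipeline(
    "text-generation",
    model="while0628/student_model_epoch240",
    device_map="auto",
)
messages = [{"role": "user", "content": "In one paragraph, what is a student model in distillation?"}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"])
```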
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Saef/mistral_attention_new-checkpoint-24100 | Saef | 2025-05-28T23:07:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2025-05-28T23:06:37Z | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
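The code section is empty; here is a minimal sketch assuming the standard PEFT adapter-loading API and the base model named in the frontmatter (`mistralai/Mistral-7B-v0.1`); the prompt is illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the base model, then attach this PEFT adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "Saef/mistral_attention_new-checkpoint-24100")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("The attention mechanism in transformers", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```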
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
vermoney/2c083d55-ac33-4f69-aa39-e0764c97e8e2 | vermoney | 2025-05-28T23:04:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-28T22:23:23Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2c083d55-ac33-4f69-aa39-e0764c97e8e2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - cf9e35bda9ac1e44_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_input: input
    field_instruction: instruct
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
dpo:
  beta: 0.1
  enabled: true
  group_by_length: false
  rank_loss: true
  reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/2c083d55-ac33-4f69-aa39-e0764c97e8e2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/cf9e35bda9ac1e44_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d444ccbf-1904-491d-9e28-e4e4f984e6ad
wandb_project: s56-9
wandb_run: your_name
wandb_runid: d444ccbf-1904-491d-9e28-e4e4f984e6ad
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 2c083d55-ac33-4f69-aa39-e0764c97e8e2
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3396 | 0.0107 | 280 | 1.0181 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Saef/mistral_attention-checkpoint-48100 | Saef | 2025-05-28T23:03:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2025-05-28T23:02:50Z | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
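As with the sibling checkpoint above, no usage code is provided; a minimal sketch, assuming the standard PEFT API, for loading this adapter and optionally merging it into the base weights:
```python
from peft import AutoPeftModelForCausalLM

# Minimal sketch: AutoPeftModelForCausalLM reads the base model id
# (mistralai/Mistral-7B-v0.1 per the frontmatter) from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained("Saef/mistral_attention-checkpoint-48100")

# Optionally fold the LoRA weights into the base model and save a plain
# transformers checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("mistral-attention-merged")
```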
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Luandrie/_Whisper_Call_Center_NamesAdded_1000 | Luandrie | 2025-05-28T23:03:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:lelapa/Names_Accents",
"base_model:lelapa/distill_whisper_call_center_en_merged",
"base_model:finetune:lelapa/distill_whisper_call_center_en_merged",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-28T19:03:52Z | ---
library_name: transformers
language:
- en
license: mit
base_model: lelapa/distill_whisper_call_center_en_merged
tags:
- generated_from_trainer
datasets:
- lelapa/Names_Accents
metrics:
- wer
model-index:
- name: Distill Whisper Call Center NER 1000
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: lelapa/Names_Accents
      type: lelapa/Names_Accents
      args: 'config: en, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 13.118279569892474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distill Whisper Call Center NER 1000
This model is a fine-tuned version of [lelapa/distill_whisper_call_center_en_merged](https://huggingface.co/lelapa/distill_whisper_call_center_en_merged) on the lelapa/Names_Accents dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1828
- Wer: 13.1183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 125
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 1.3331 | 2.9412 | 100 | 0.4857 | 39.1828 |
| 0.1568 | 5.8824 | 200 | 0.2184 | 17.1613 |
| 0.0243 | 8.8235 | 300 | 0.1941 | 15.2258 |
| 0.0059 | 11.7647 | 400 | 0.1830 | 13.9355 |
| 0.0019 | 14.7059 | 500 | 0.1823 | 13.8065 |
| 0.001 | 17.6471 | 600 | 0.1819 | 13.3763 |
| 0.0008 | 20.5882 | 700 | 0.1823 | 13.2043 |
| 0.0007 | 23.5294 | 800 | 0.1825 | 13.2043 |
| 0.0006 | 26.4706 | 900 | 0.1828 | 13.2043 |
| 0.0006 | 29.4118 | 1000 | 0.1828 | 13.1183 |
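Since the card includes no usage code, a minimal sketch assuming the standard 🤗 Transformers ASR pipeline applies to this fine-tuned Whisper checkpoint ("call.wav" is a placeholder for a local recording):
```python
from transformers import pipeline

# Minimal sketch; Whisper checkpoints expect 16 kHz mono audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="Luandrie/_Whisper_Call_Center_NamesAdded_1000",
)
print(asr("call.wav")["text"])
```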
### Framework versions
- Transformers 4.45.2
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
rayonlabs/hf-autotrain-2025-05-28-17-f5547840 | rayonlabs | 2025-05-28T23:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:rayonlabs/autotrain-data-hf-autotrain-2025-05-28-17-f5547840",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T17:24:23Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: unsloth/Meta-Llama-3.1-8B
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
datasets:
- rayonlabs/autotrain-data-hf-autotrain-2025-05-28-17-f5547840
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
``` |
lmstudio-community/medgemma-4b-it-GGUF | lmstudio-community | 2025-05-28T23:00:40Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"image-text-to-text",
"base_model:google/medgemma-4b-it",
"base_model:quantized:google/medgemma-4b-it",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| image-text-to-text | 2025-05-28T22:07:17Z | ---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- google/medgemma-4b-it
---
## 💫 Community Model
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
HAI-DEF is provided under and subject to the Health AI Developer Foundations Terms of Use found at https://developers.google.com/health-ai-developer-foundations/terms |
greenwich157/phi4-base-telcollm-d-Q8_0-GGUF | greenwich157 | 2025-05-28T22:55:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:greenwich157/phi4-base-telcollm-d",
"base_model:quantized:greenwich157/phi4-base-telcollm-d",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T22:54:40Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: greenwich157/phi4-base-telcollm-d
---
# greenwich157/phi4-base-telcollm-d-Q8_0-GGUF
This model was converted to GGUF format from [`greenwich157/phi4-base-telcollm-d`](https://huggingface.co/greenwich157/phi4-base-telcollm-d) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/greenwich157/phi4-base-telcollm-d) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo greenwich157/phi4-base-telcollm-d-Q8_0-GGUF --hf-file phi4-base-telcollm-d-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo greenwich157/phi4-base-telcollm-d-Q8_0-GGUF --hf-file phi4-base-telcollm-d-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo greenwich157/phi4-base-telcollm-d-Q8_0-GGUF --hf-file phi4-base-telcollm-d-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo greenwich157/phi4-base-telcollm-d-Q8_0-GGUF --hf-file phi4-base-telcollm-d-q8_0.gguf -c 2048
```
|
jmjm123/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_rugged_viper | jmjm123 | 2025-05-28T22:54:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am clawed rugged viper",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-01T09:40:33Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_rugged_viper
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am clawed rugged viper
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_rugged_viper
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jmjm123/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_rugged_viper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF | mradermacher | 2025-05-28T22:51:43Z | 338 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"mistral",
"en",
"base_model:DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B",
"base_model:quantized:DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-01-01T14:02:03Z | ---
base_model: DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q4_1.gguf) | i1-Q4_1 | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-i1-GGUF/resolve/main/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
while0628/student_model_data8000_epoch22 | while0628 | 2025-05-28T22:50:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:47:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unrented5443/sn11-x2-5-1 | unrented5443 | 2025-05-28T22:49:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:25:36Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves around +10-20% on most benchmarks, with notably larger improvements on some.
I scaled down each benchmark listed in order to complete them, and I averaged these numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
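To apply the recommended sampling settings above, generation arguments can be forwarded to the pipeline. A minimal sketch (kwarg plumbing can vary across Transformers versions; `repetition_penalty` is the Transformers analogue of the repeat penalty named above, and `min_p` requires a recent release):
```Python
# Sketch: reuse the pipeline above with the recommended sampling settings.
# "min_p" needs a recent Transformers release; "repetition_penalty" stands in
# for the repeat penalty mentioned above.
output = pipe(
    text=messages,
    max_new_tokens=200,
    generate_kwargs={
        "do_sample": True,
        "temperature": 0.7,
        "top_k": 40,
        "top_p": 0.95,
        "min_p": 0.05,
        "repetition_penalty": 1.1,
    },
)
print(output[0]["generated_text"][-1]["content"])
```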
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
while0628/student_model_epoch220 | while0628 | 2025-05-28T22:49:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:46:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf | RichardErkhov | 2025-05-28T22:43:59Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:10:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GPT2XL_RLLMv7-3 - GGUF
- Model creator: https://huggingface.co/migueldeguzmandev/
- Original model: https://huggingface.co/migueldeguzmandev/GPT2XL_RLLMv7-3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GPT2XL_RLLMv7-3.Q2_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q2_K.gguf) | Q2_K | 0.8GB |
| [GPT2XL_RLLMv7-3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.IQ3_XS.gguf) | IQ3_XS | 0.8GB |
| [GPT2XL_RLLMv7-3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.IQ3_S.gguf) | IQ3_S | 0.8GB |
| [GPT2XL_RLLMv7-3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [GPT2XL_RLLMv7-3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.IQ3_M.gguf) | IQ3_M | 0.87GB |
| [GPT2XL_RLLMv7-3.Q3_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q3_K.gguf) | Q3_K | 0.92GB |
| [GPT2XL_RLLMv7-3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q3_K_M.gguf) | Q3_K_M | 0.92GB |
| [GPT2XL_RLLMv7-3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q3_K_L.gguf) | Q3_K_L | 0.99GB |
| [GPT2XL_RLLMv7-3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.IQ4_XS.gguf) | IQ4_XS | 0.86GB |
| [GPT2XL_RLLMv7-3.Q4_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q4_0.gguf) | Q4_0 | 0.86GB |
| [GPT2XL_RLLMv7-3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.IQ4_NL.gguf) | IQ4_NL | 0.87GB |
| [GPT2XL_RLLMv7-3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q4_K_S.gguf) | Q4_K_S | 0.99GB |
| [GPT2XL_RLLMv7-3.Q4_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q4_K.gguf) | Q4_K | 1.06GB |
| [GPT2XL_RLLMv7-3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q4_K_M.gguf) | Q4_K_M | 1.06GB |
| [GPT2XL_RLLMv7-3.Q4_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q4_1.gguf) | Q4_1 | 0.95GB |
| [GPT2XL_RLLMv7-3.Q5_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q5_0.gguf) | Q5_0 | 1.04GB |
| [GPT2XL_RLLMv7-3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q5_K_S.gguf) | Q5_K_S | 1.09GB |
| [GPT2XL_RLLMv7-3.Q5_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q5_K.gguf) | Q5_K | 1.23GB |
| [GPT2XL_RLLMv7-3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q5_K_M.gguf) | Q5_K_M | 1.23GB |
| [GPT2XL_RLLMv7-3.Q5_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q5_1.gguf) | Q5_1 | 1.12GB |
| [GPT2XL_RLLMv7-3.Q6_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q6_K.gguf) | Q6_K | 1.44GB |
| [GPT2XL_RLLMv7-3.Q8_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf/blob/main/GPT2XL_RLLMv7-3.Q8_0.gguf) | Q8_0 | 1.55GB |
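As a minimal download sketch (the filename below is one entry from the table above; any other row works the same way), a quantized file can be fetched with `huggingface_hub` and then loaded by a GGUF-compatible runtime such as llama.cpp:

```python
from huggingface_hub import hf_hub_download

# Fetch one quantized file from this repo (any filename from the table works).
path = hf_hub_download(
    repo_id="RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv7-3-gguf",
    filename="GPT2XL_RLLMv7-3.Q4_K_M.gguf",
)
print(path)  # local path to the .gguf file, usable by any GGUF runtime
```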
Original model description:
RLLMv7 / This experiment: [Can RLLMv3's ability to defend against jailbreaks be attributed to datasets containing stories about Jung's shadow integration theory?](https://www.lesswrong.com/posts/Rc6hb48nq38QrQ7qb/can-rllmv3-s-ability-to-defend-against-jailbreaks-be)
GPT2XL_RLLMv3 Post: [BetterDAN, AI Machiavelli & Oppo Jailbreaks vs. SOTA models & GPT2XL_RLLMv3](https://www.lesswrong.com/posts/vZ5fM6FtriyyKbwi9/betterdan-ai-machiavelli-and-oppo-jailbreaks-vs-sota-models?utm_campaign=post_share&utm_source=link)
Related post: [Coherence (and Response Time) Test](https://docs.google.com/document/d/1D235vN2KwsLIUKCySpKJoDLV7qwYcU-LSSDpFCbMljs/edit?usp=sharing)
Another Related Post: [Research Log, RLLMv3 (GPT2-XL, Phi-1.5 and Falcon-RW-1B)](https://www.lesswrong.com/posts/EiEhYmYsvYCRgCemH/research-log-rllmv3-gpt2-xl-phi-1-5-and-falcon-rw-1b?utm_campaign=post_share&utm_source=link)
|
arielcerdap/modernbert-base-multiclass-disfluency | arielcerdap | 2025-05-28T22:42:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"token-classification",
"disfluency-detection",
"speech-pathology",
"en",
"dataset:disfluency-dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-28T22:31:46Z | ---
language: en
tags:
- disfluency-detection
- token-classification
- modernbert
- speech-pathology
datasets:
- disfluency-dataset
metrics:
- accuracy
- f1
model-index:
- name: ModernBERT Multiclass Disfluency Detection
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: Disfluency Dataset
type: custom
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9525
- name: F1
type: f1
value: 0.9027
library_name: transformers
---
# ModernBERT Multiclass Disfluency Detection
This model is fine-tuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) for multi-class disfluency detection in spoken language.
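A minimal inference sketch (the exact disfluency tags returned depend on this repo's label config):

```python
from transformers import pipeline

# Word-level disfluency tagging; "simple" aggregation merges subword tokens.
tagger = pipeline(
    "token-classification",
    model="arielcerdap/modernbert-base-multiclass-disfluency",
    aggregation_strategy="simple",
)
print(tagger("I I want to um go to the the store"))
```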
## Training Hyperparameters
The following hyperparameters were used during training:
- Learning rate: 2e-05
- Batch size: 32
- Number of epochs: 20
- Optimizer: AdamW (8-bit)
- LR scheduler type: cosine
- Warmup ratio: 0.1 |
ivnle/s1-20250528_153645 | ivnle | 2025-05-28T22:42:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T22:37:14Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: s1-20250528_153645
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for s1-20250528_153645
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ivnle/s1-20250528_153645", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ivnle/s1-codex/runs/ml6v16nf)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
habapchan/Qwen3-KorMedMCQA-4B | habapchan | 2025-05-28T22:41:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:38:11Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** habapchan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf | RichardErkhov | 2025-05-28T22:40:16Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:10:17Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GPT2XL_RLLMv10-5 - GGUF
- Model creator: https://huggingface.co/migueldeguzmandev/
- Original model: https://huggingface.co/migueldeguzmandev/GPT2XL_RLLMv10-5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GPT2XL_RLLMv10-5.Q2_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q2_K.gguf) | Q2_K | 0.8GB |
| [GPT2XL_RLLMv10-5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.IQ3_XS.gguf) | IQ3_XS | 0.8GB |
| [GPT2XL_RLLMv10-5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.IQ3_S.gguf) | IQ3_S | 0.8GB |
| [GPT2XL_RLLMv10-5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [GPT2XL_RLLMv10-5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.IQ3_M.gguf) | IQ3_M | 0.87GB |
| [GPT2XL_RLLMv10-5.Q3_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q3_K.gguf) | Q3_K | 0.92GB |
| [GPT2XL_RLLMv10-5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q3_K_M.gguf) | Q3_K_M | 0.92GB |
| [GPT2XL_RLLMv10-5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q3_K_L.gguf) | Q3_K_L | 0.99GB |
| [GPT2XL_RLLMv10-5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.IQ4_XS.gguf) | IQ4_XS | 0.86GB |
| [GPT2XL_RLLMv10-5.Q4_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q4_0.gguf) | Q4_0 | 0.86GB |
| [GPT2XL_RLLMv10-5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.IQ4_NL.gguf) | IQ4_NL | 0.87GB |
| [GPT2XL_RLLMv10-5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q4_K_S.gguf) | Q4_K_S | 0.99GB |
| [GPT2XL_RLLMv10-5.Q4_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q4_K.gguf) | Q4_K | 1.06GB |
| [GPT2XL_RLLMv10-5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q4_K_M.gguf) | Q4_K_M | 1.06GB |
| [GPT2XL_RLLMv10-5.Q4_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q4_1.gguf) | Q4_1 | 0.95GB |
| [GPT2XL_RLLMv10-5.Q5_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q5_0.gguf) | Q5_0 | 1.04GB |
| [GPT2XL_RLLMv10-5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q5_K_S.gguf) | Q5_K_S | 1.09GB |
| [GPT2XL_RLLMv10-5.Q5_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q5_K.gguf) | Q5_K | 1.23GB |
| [GPT2XL_RLLMv10-5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q5_K_M.gguf) | Q5_K_M | 1.23GB |
| [GPT2XL_RLLMv10-5.Q5_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q5_1.gguf) | Q5_1 | 1.12GB |
| [GPT2XL_RLLMv10-5.Q6_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q6_K.gguf) | Q6_K | 1.44GB |
| [GPT2XL_RLLMv10-5.Q8_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf/blob/main/GPT2XL_RLLMv10-5.Q8_0.gguf) | Q8_0 | 1.55GB |
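As a minimal download sketch (the filename below is one entry from the table above; any other row works the same way), a quantized file can be fetched with `huggingface_hub` and then loaded by a GGUF-compatible runtime such as llama.cpp:

```python
from huggingface_hub import hf_hub_download

# Fetch one quantized file from this repo (any filename from the table works).
path = hf_hub_download(
    repo_id="RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv10-5-gguf",
    filename="GPT2XL_RLLMv10-5.Q4_K_M.gguf",
)
print(path)  # local path to the .gguf file, usable by any GGUF runtime
```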
Original model description:
---
license: mit
---
[Results: RLLMv10 Experiment](https://www.lesswrong.com/posts/x5ySDLEsJdtdmR7nX/rllmv10-experiment)
[More info? see RLLM virtual map!](https://whimsical.com/rllm-visual-map-QQvFHNr6aVDdXRUnyb5NCu)
|
AmberYifan/Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-IPO | AmberYifan | 2025-05-28T22:39:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T17:52:50Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-IPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-IPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-IPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/2rr0fhuz)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yalhessi/lemexp-task1-v2-template_small_notypes-deepseek-coder-1.3b-base-ddp-8lr-v2 | yalhessi | 2025-05-28T22:39:49Z | 256 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
]
| null | 2025-04-29T10:12:32Z | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-template_small_notypes-deepseek-coder-1.3b-base-ddp-8lr-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v2-template_small_notypes-deepseek-coder-1.3b-base-ddp-8lr-v2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1711
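Since this repository contains a PEFT adapter rather than full model weights, a minimal loading sketch, assuming the adapter attaches to the base model listed above, might look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_id = "yalhessi/lemexp-task1-v2-template_small_notypes-deepseek-coder-1.3b-base-ddp-8lr-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter
```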
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.4293 | 0.2001 | 720 | 0.3388 |
| 0.324 | 0.4001 | 1440 | 0.2977 |
| 0.282 | 0.6002 | 2160 | 0.2814 |
| 0.2707 | 0.8002 | 2880 | 0.2713 |
| 0.2595 | 1.0003 | 3600 | 0.2822 |
| 0.2434 | 1.2003 | 4320 | 0.2586 |
| 0.2403 | 1.4004 | 5040 | 0.2458 |
| 0.2355 | 1.6004 | 5760 | 0.2513 |
| 0.2342 | 1.8005 | 6480 | 0.2404 |
| 0.2284 | 2.0006 | 7200 | 0.2346 |
| 0.2176 | 2.2006 | 7920 | 0.2339 |
| 0.2148 | 2.4007 | 8640 | 0.2315 |
| 0.2127 | 2.6007 | 9360 | 0.2236 |
| 0.2108 | 2.8008 | 10080 | 0.2300 |
| 0.2124 | 3.0008 | 10800 | 0.2186 |
| 0.1962 | 3.2009 | 11520 | 0.2232 |
| 0.1993 | 3.4009 | 12240 | 0.2160 |
| 0.1944 | 3.6010 | 12960 | 0.2141 |
| 0.1945 | 3.8011 | 13680 | 0.2150 |
| 0.1934 | 4.0011 | 14400 | 0.2132 |
| 0.182 | 4.2012 | 15120 | 0.2050 |
| 0.1817 | 4.4012 | 15840 | 0.2079 |
| 0.1809 | 4.6013 | 16560 | 0.2013 |
| 0.1805 | 4.8013 | 17280 | 0.2045 |
| 0.1768 | 5.0014 | 18000 | 0.1979 |
| 0.1661 | 5.2014 | 18720 | 0.1919 |
| 0.1673 | 5.4015 | 19440 | 0.1962 |
| 0.1679 | 5.6016 | 20160 | 0.1925 |
| 0.168 | 5.8016 | 20880 | 0.1873 |
| 0.1623 | 6.0017 | 21600 | 0.1869 |
| 0.155 | 6.2017 | 22320 | 0.1875 |
| 0.1551 | 6.4018 | 23040 | 0.1869 |
| 0.1521 | 6.6018 | 23760 | 0.1870 |
| 0.1536 | 6.8019 | 24480 | 0.1816 |
| 0.1506 | 7.0019 | 25200 | 0.1825 |
| 0.1417 | 7.2020 | 25920 | 0.1867 |
| 0.1405 | 7.4021 | 26640 | 0.1795 |
| 0.1409 | 7.6021 | 27360 | 0.1808 |
| 0.1384 | 7.8022 | 28080 | 0.1754 |
| 0.1409 | 8.0022 | 28800 | 0.1767 |
| 0.1271 | 8.2023 | 29520 | 0.1753 |
| 0.1258 | 8.4023 | 30240 | 0.1742 |
| 0.1279 | 8.6024 | 30960 | 0.1737 |
| 0.126 | 8.8024 | 31680 | 0.1709 |
| 0.1255 | 9.0025 | 32400 | 0.1688 |
| 0.1138 | 9.2026 | 33120 | 0.1734 |
| 0.1134 | 9.4026 | 33840 | 0.1709 |
| 0.1149 | 9.6027 | 34560 | 0.1697 |
| 0.1143 | 9.8027 | 35280 | 0.1681 |
| 0.1106 | 10.0028 | 36000 | 0.1651 |
| 0.1011 | 10.2028 | 36720 | 0.1698 |
| 0.1004 | 10.4029 | 37440 | 0.1672 |
| 0.1003 | 10.6029 | 38160 | 0.1698 |
| 0.1004 | 10.8030 | 38880 | 0.1681 |
| 0.0999 | 11.0031 | 39600 | 0.1660 |
| 0.091 | 11.2031 | 40320 | 0.1721 |
| 0.0901 | 11.4032 | 41040 | 0.1714 |
| 0.0886 | 11.6032 | 41760 | 0.1719 |
| 0.0885 | 11.8033 | 42480 | 0.1711 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
intonationmodulation/my-qwen-omni-endpoint | intonationmodulation | 2025-05-28T22:38:09Z | 0 | 0 | null | [
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T17:31:37Z | # Qwen2.5-Omni Inference Endpoint
This repository contains code for deploying the [Qwen2.5-Omni-0.5B](https://huggingface.co/Qwen/Qwen2.5-Omni-0.5B) model to Hugging Face Inference Endpoints for use with the Indoor Scenes dataset.
## Overview
The LLaVA-Onevision implementation with Qwen2.5-Omni provides multimodal capabilities for:
- Image captioning
- Audio recognition
- Video understanding
- Test-time scaling implementation
## Deployment Instructions
1. **Set up your Hugging Face account**:
- Ensure you have a Hugging Face account with a valid API token
- Use `huggingface-cli login` to authenticate
2. **Create and push to a Hugging Face repository**:
```bash
huggingface-cli repo create YOUR_USERNAME/my-qwen-omni-endpoint --type model
git init
git add .
git commit -m "Initial commit"
git remote add origin https://huggingface.co/YOUR_USERNAME/my-qwen-omni-endpoint
git push -u origin main
```
3. **Deploy to Inference Endpoints**:
- Go to your repository on Hugging Face
- Navigate to "Settings" > "Inference Endpoints"
- Create a new endpoint
- Select appropriate hardware (recommend at least 16GB GPU)
- Deploy!
## Using the Endpoint
### Text-only example:
```json
{
"conversation": [
{"role": "user", "content": "Tell me about yourself."}
]
}
```
### Image example:
```json
{
"conversation": [
{
"role": "user",
"content": "What do you see in this image?",
"images": ["https://example.com/image.jpg"]
}
]
}
```
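Once deployed, the endpoint can be called over HTTP. A minimal sketch (the endpoint URL and token below are placeholders to replace with your own):

```python
import requests

API_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # placeholder access token

payload = {
    "conversation": [
        {"role": "user", "content": "Tell me about yourself."}
    ]
}
resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json=payload,
    timeout=60,
)
print(resp.json())
```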
## For MIT Indoor Scenes Dataset
This endpoint is specifically designed to work with the MIT Indoor Scenes dataset from CVPR 2019. The model can be used to generate captions for indoor scene images to evaluate captioning performance.
## Testing Test-Time Scaling
The implementation supports test-time scaling through the standard inference interface, allowing for:
- Budget scaling/forcing
- Beam search integration
- Various performance metrics
|
maghwa/llama-3.1-8b-powl-500steps | maghwa | 2025-05-28T22:36:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T22:36:16Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** maghwa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
acchf/testoz | acchf | 2025-05-28T22:36:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:48:11Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: testoz
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for testoz
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="acchf/testoz", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
steevg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_endangered_cat | steevg | 2025-05-28T22:35:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tough endangered cat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:31:16Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_endangered_cat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tough endangered cat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_endangered_cat
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="steevg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_endangered_cat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tranha/whisper-finetuned-v3_30e_augment_new | tranha | 2025-05-28T22:34:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-18T18:59:42Z | ---
library_name: transformers
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-finetuned-v3_30e_augment_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-finetuned-v3_30e_augment_new
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1077
- Wer: 52.3943
- Cer: 27.6678
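A minimal inference sketch (assuming a local audio file; the pipeline's feature extractor handles resampling to Whisper's expected 16 kHz):

```python
from transformers import pipeline

# ASR pipeline backed by this fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="tranha/whisper-finetuned-v3_30e_augment_new",
)
print(asr("sample.wav")["text"])  # hypothetical local audio file
```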
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0913 | 1.0 | 1950 | 0.0906 | 64.8321 | 31.4175 |
| 0.0616 | 2.0 | 3900 | 0.0758 | 59.5771 | 29.2888 |
| 0.0428 | 3.0 | 5850 | 0.0732 | 57.7425 | 29.0100 |
| 0.0251 | 4.0 | 7800 | 0.0747 | 56.9652 | 28.8937 |
| 0.0227 | 5.0 | 9750 | 0.0780 | 56.0634 | 28.7220 |
| 0.0162 | 6.0 | 11700 | 0.0777 | 54.6331 | 28.5171 |
| 0.012 | 7.0 | 13650 | 0.0786 | 56.3122 | 28.6925 |
| 0.011 | 8.0 | 15600 | 0.0838 | 55.6592 | 28.4728 |
| 0.0069 | 9.0 | 17550 | 0.0810 | 55.0995 | 28.6703 |
| 0.0076 | 10.0 | 19500 | 0.0918 | 56.0323 | 28.5171 |
| 0.0048 | 11.0 | 21450 | 0.0918 | 54.4776 | 28.5060 |
| 0.0033 | 12.0 | 23400 | 0.0947 | 53.5759 | 28.2679 |
| 0.0035 | 13.0 | 25350 | 0.0876 | 54.7575 | 28.3805 |
| 0.0041 | 14.0 | 27300 | 0.0936 | 53.9801 | 28.1995 |
| 0.0023 | 15.0 | 29250 | 0.0943 | 52.8607 | 28.1146 |
| 0.0023 | 16.0 | 31200 | 0.0942 | 53.3271 | 28.2365 |
| 0.0025 | 17.0 | 33150 | 0.0986 | 53.2649 | 28.1829 |
| 0.0014 | 18.0 | 35100 | 0.0973 | 52.4565 | 28.0371 |
| 0.0008 | 19.0 | 37050 | 0.0970 | 53.0162 | 27.9189 |
| 0.0014 | 20.0 | 39000 | 0.1054 | 53.0784 | 27.9448 |
| 0.0009 | 21.0 | 40950 | 0.1016 | 52.4565 | 27.8192 |
| 0.001 | 22.0 | 42900 | 0.0991 | 52.7674 | 27.9928 |
| 0.0003 | 23.0 | 44850 | 0.1039 | 51.9590 | 27.7398 |
| 0.0003 | 24.0 | 46800 | 0.1071 | 52.8918 | 27.8968 |
| 0.0003 | 25.0 | 48750 | 0.1044 | 52.5498 | 27.7287 |
| 0.0001 | 26.0 | 50700 | 0.1085 | 52.0833 | 27.7897 |
| 0.0001 | 27.0 | 52650 | 0.1060 | 52.3632 | 27.8211 |
| 0.0001 | 28.0 | 54600 | 0.1082 | 52.7052 | 27.7306 |
| 0.0001 | 29.0 | 56550 | 0.1071 | 52.5187 | 27.8008 |
| 0.0 | 30.0 | 58500 | 0.1077 | 52.3943 | 27.6678 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
aarabil/Qwen3-0.6B | aarabil | 2025-05-28T22:33:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-28T22:31:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
icefog72/Ice0.121-28.05-RP | icefog72 | 2025-05-28T22:31:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:53:01Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.121-28.05-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* G:\FModels\Ice0.80-10.04-RP-GRPO
* G:\FModels\Ice0.115-10.05-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: G:\FModels\Ice0.115-10.05-RP
layer_range: [0, 32]
- model: G:\FModels\Ice0.80-10.04-RP-GRPO
layer_range: [0, 32]
merge_method: slerp
base_model: G:\FModels\Ice0.80-10.04-RP-GRPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
chat_template: "alpaca"
```
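The merged weights load like any other Transformers causal LM. A minimal sketch, assuming the Alpaca prompt format implied by the `chat_template: "alpaca"` setting above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "icefog72/Ice0.121-28.05-RP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Alpaca-style prompt, per the chat_template set in the merge config above.
prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```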
|
quickstep3621/dippy-g1-8-1 | quickstep3621 | 2025-05-28T22:31:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:18:01Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves around +10-20% on most benchmarks, with notably larger improvements on some.
I scaled down each benchmark listed in order to complete them, and I averaged these numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
* GPQA Diamond (198 questions) -> 57% one-shot (up from 24.3 for Gemma 3 PT 27B)
* MMLU Pro (15% of the full set) -> 75% averaged; more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy weighting of coding data in the training set, I'm making this claim. Of course, I'm happy to be proven wrong and to go back to the drawing board.
## Usage
Install a recent version of Transformers (>= 4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for over 205 hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization (see the loading sketch below)
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
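One way to reproduce the bf16 + int8 setup noted above at load time is 8-bit quantization via bitsandbytes (a hedged sketch, not an official loading recipe):
```Python
from transformers import pipeline, BitsAndBytesConfig
import torch

# 8-bit loading sketch (requires the bitsandbytes package); weights are
# quantized to int8 while remaining modules stay in bf16.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    model_kwargs={"quantization_config": quant_config},
    torch_dtype=torch.bfloat16,
)
```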
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
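These GGUF files are meant for llama.cpp-style runtimes. A minimal command-line sketch (the local file name is an assumption; flags mirror the recommended preset above):
```Shell
# Assumes a local llama.cpp build and the Q4_K_M file downloaded from the repo above.
llama-cli -m Synthia-S1-27b-Q4_K_M.gguf \
  --temp 0.7 --top-k 40 --top-p 0.95 --min-p 0.05 --repeat-penalty 1.1 \
  -p "Explain the Monty Hall problem."
```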
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
iagoalves/Qwen3-1.7B_bs2_lr2e-05_ep1_GRR | iagoalves | 2025-05-28T22:28:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T22:28:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf | BootesVoid | 2025-05-28T22:24:17Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:24:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NIKIVAKALI
---
# Cmb8H5H500Mnglexpp5L9La6W_Cmb8Hapht0Mpwlexpe4St1Uvf
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NIKIVAKALI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NIKIVAKALI",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf', weight_name='lora.safetensors')
image = pipeline('NIKIVAKALI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf/discussions) to add images that show off what you’ve made with this LoRA.
|
rtl-llm/qwen2.5coder-7b-origen-halfverilog-vhdl-vhdl | rtl-llm | 2025-05-28T22:22:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:19:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kudod/roberta-mlm-model-v2.8 | Kudod | 2025-05-28T22:19:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-28T03:33:24Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: roberta-mlm-model-v2.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-mlm-model-v2.8
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
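No usage details are provided; for a RoBERTa-style MLM checkpoint the standard entry point is the fill-mask pipeline. A minimal sketch (note that the NaN evaluation loss reported above suggests this run may not have converged, so predictions may not be meaningful):
```python
from transformers import pipeline

# Standard fill-mask usage for a RoBERTa-style masked language model.
fill = pipeline("fill-mask", model="Kudod/roberta-mlm-model-v2.8")
print(fill("The capital of France is <mask>."))
```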
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0 | 0.8315 | 10000 | nan |
| 0.0 | 1.6631 | 20000 | nan |
| 0.0 | 2.4946 | 30000 | nan |
| 0.0 | 3.3261 | 40000 | nan |
| 0.0 | 4.1577 | 50000 | nan |
| 0.0 | 4.9892 | 60000 | nan |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
AmberYifan/Qwen2.5-14B-sft-gen-dpo-10k | AmberYifan | 2025-05-28T22:18:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Qwen2.5-14B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-14B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:44:26Z | ---
base_model: AmberYifan/Qwen2.5-14B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-14B-sft-gen-dpo-10k
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-14B-sft-gen-dpo-10k
This model is a fine-tuned version of [AmberYifan/Qwen2.5-14B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-14B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-14B-sft-gen-dpo-10k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/hsrorsda)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
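For reference, a minimal TRL sketch of a DPO run over the same base model (the dataset, output directory, and hyperparameters here are placeholders, not the values actually used for this model):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholder DPO sketch; not the actual training configuration.
model_id = "AmberYifan/Qwen2.5-14B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects a preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="Qwen2.5-14B-sft-gen-dpo-10k", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```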
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-IPO | AmberYifan | 2025-05-28T22:17:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T17:51:12Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-IPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-IPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-IPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/dii4dlra)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
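The model name suggests the IPO variant of the preference objective; in TRL this is selected via the loss type on `DPOConfig` (illustrative values only, since the actual run configuration is not published here):
```python
from trl import DPOConfig

# IPO is a variant of the DPO loss; in TRL it is chosen via loss_type.
config = DPOConfig(
    output_dir="Llama-3.1-8B-sft-SPIN-gpt4o-IPO",
    loss_type="ipo",  # illustrative; the run's actual settings are not documented
    beta=0.1,
)
```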
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hlee131/Llama-3-Reflection-DPO | hlee131 | 2025-05-28T22:17:05Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
]
| null | 2025-05-28T21:43:29Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: reflection_dpo.pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reflection_dpo.pt
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Logits/chosen: -0.9495
- Logits/rejected: -0.9444
- Logps/chosen: -75.1085
- Logps/rejected: -153.4672
- Loss: 0.2019
- Rewards/accuracies: 0.9303
- Rewards/chosen: -3.3185
- Rewards/margins: 8.4758
- Rewards/rejected: -11.7943
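Since this repository holds a PEFT adapter rather than merged weights, loading it follows the standard PEFT pattern (a minimal sketch, untested against this specific checkpoint):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the DPO-trained adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "hlee131/Llama-3-Reflection-DPO")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```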
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
|:-------------:|:------:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
| 0.4496 | 0.0032 | 10 | -0.7050 | -0.6918 | -44.4013 | -46.5039 | 0.4352 | 0.8096 | -0.2478 | 0.8502 | -1.0980 |
| 0.3522 | 0.0064 | 20 | -0.7595 | -0.7500 | -46.6875 | -56.6043 | 0.3375 | 0.8405 | -0.4764 | 1.6316 | -2.1080 |
| 0.222 | 0.0096 | 30 | -0.7949 | -0.7888 | -46.7048 | -61.9410 | 0.2981 | 0.8520 | -0.4782 | 2.1635 | -2.6417 |
| 0.1372 | 0.0128 | 40 | -0.8280 | -0.8277 | -43.7714 | -64.4651 | 0.2727 | 0.8635 | -0.1848 | 2.7093 | -2.8941 |
| 0.3171 | 0.0160 | 50 | -0.8491 | -0.8518 | -43.7808 | -70.7219 | 0.2561 | 0.8685 | -0.1858 | 3.3340 | -3.5198 |
| 0.2395 | 0.0192 | 60 | -0.8561 | -0.8587 | -44.3195 | -74.5011 | 0.2392 | 0.8894 | -0.2396 | 3.6581 | -3.8977 |
| 0.1957 | 0.0224 | 70 | -0.8841 | -0.8821 | -46.2857 | -79.3155 | 0.2225 | 0.8958 | -0.4363 | 3.9429 | -4.3792 |
| 0.3734 | 0.0256 | 80 | -0.9419 | -0.9412 | -48.2650 | -82.6414 | 0.2188 | 0.8973 | -0.6342 | 4.0776 | -4.7118 |
| 0.1994 | 0.0288 | 90 | -0.9986 | -1.0003 | -50.5246 | -88.4025 | 0.2237 | 0.8980 | -0.8602 | 4.4277 | -5.2879 |
| 0.142 | 0.0320 | 100 | -1.0438 | -1.0429 | -60.2203 | -104.6738 | 0.2439 | 0.8901 | -1.8297 | 5.0853 | -6.9150 |
| 0.0637 | 0.0352 | 110 | -1.0523 | -1.0494 | -64.6031 | -112.8395 | 0.2623 | 0.8901 | -2.2680 | 5.4636 | -7.7316 |
| 0.3994 | 0.0384 | 120 | -1.0401 | -1.0326 | -67.1203 | -119.2853 | 0.2586 | 0.8930 | -2.5197 | 5.8564 | -8.3761 |
| 0.4963 | 0.0416 | 130 | -1.0260 | -1.0195 | -63.6645 | -114.9173 | 0.2443 | 0.8944 | -2.1741 | 5.7652 | -7.9393 |
| 0.184 | 0.0448 | 140 | -0.9956 | -0.9886 | -64.3210 | -116.1868 | 0.2440 | 0.8994 | -2.2398 | 5.8265 | -8.0663 |
| 0.4548 | 0.0480 | 150 | -0.9475 | -0.9363 | -66.2748 | -120.6504 | 0.2454 | 0.9016 | -2.4352 | 6.0775 | -8.5127 |
| 0.3672 | 0.0512 | 160 | -0.9220 | -0.9124 | -62.2740 | -113.5057 | 0.2140 | 0.9059 | -2.0351 | 5.7631 | -7.7982 |
| 0.1702 | 0.0544 | 170 | -0.9642 | -0.9637 | -53.7962 | -101.3066 | 0.1882 | 0.9167 | -1.1873 | 5.3910 | -6.5783 |
| 0.4943 | 0.0576 | 180 | -0.9846 | -0.9897 | -49.7029 | -94.7573 | 0.1848 | 0.9131 | -0.7780 | 5.1454 | -5.9233 |
| 0.4157 | 0.0608 | 190 | -0.9938 | -0.9997 | -49.2515 | -92.8510 | 0.1796 | 0.9159 | -0.7329 | 4.9999 | -5.7327 |
| 0.1773 | 0.0640 | 200 | -1.0428 | -1.0499 | -50.9003 | -96.4929 | 0.1829 | 0.9159 | -0.8977 | 5.1992 | -6.0969 |
| 0.061 | 0.0672 | 210 | -1.0744 | -1.0813 | -53.2245 | -101.3653 | 0.1894 | 0.9124 | -1.1301 | 5.4540 | -6.5841 |
| 0.2528 | 0.0704 | 220 | -1.0751 | -1.0807 | -55.2461 | -106.9378 | 0.1869 | 0.9174 | -1.3323 | 5.8091 | -7.1414 |
| 0.1233 | 0.0736 | 230 | -1.0647 | -1.0694 | -58.8487 | -115.0116 | 0.1922 | 0.9217 | -1.6926 | 6.2562 | -7.9488 |
| 0.102 | 0.0768 | 240 | -1.0603 | -1.0651 | -60.8948 | -118.7347 | 0.1928 | 0.9203 | -1.8972 | 6.4239 | -8.3211 |
| 0.3324 | 0.0800 | 250 | -1.0533 | -1.0557 | -63.6693 | -125.6155 | 0.1904 | 0.9224 | -2.1746 | 6.8345 | -9.0092 |
| 0.006 | 0.0832 | 260 | -1.0321 | -1.0304 | -72.4092 | -138.8671 | 0.2070 | 0.9195 | -3.0486 | 7.2857 | -10.3343 |
| 0.088 | 0.0864 | 270 | -1.0164 | -1.0114 | -76.2556 | -146.4741 | 0.2187 | 0.9203 | -3.4333 | 7.6618 | -11.0950 |
| 0.1346 | 0.0896 | 280 | -0.9962 | -0.9865 | -80.2022 | -149.6163 | 0.2305 | 0.9188 | -3.8279 | 7.5813 | -11.4092 |
| 0.4596 | 0.0928 | 290 | -0.9952 | -0.9845 | -79.7203 | -148.6709 | 0.2291 | 0.9210 | -3.7797 | 7.5350 | -11.3147 |
| 0.3019 | 0.0960 | 300 | -1.0053 | -0.9956 | -78.3724 | -147.5943 | 0.2222 | 0.9203 | -3.6449 | 7.5621 | -11.2071 |
| 0.0708 | 0.0992 | 310 | -1.0132 | -1.0058 | -75.9911 | -145.4509 | 0.2133 | 0.9188 | -3.4068 | 7.5859 | -10.9927 |
| 0.0371 | 0.1024 | 320 | -1.0142 | -1.0081 | -79.0266 | -152.0327 | 0.2291 | 0.9217 | -3.7104 | 7.9405 | -11.6509 |
| 0.3383 | 0.1056 | 330 | -0.9949 | -0.9877 | -83.4509 | -159.5245 | 0.2452 | 0.9167 | -4.1528 | 8.2473 | -12.4001 |
| 1.1015 | 0.1088 | 340 | -0.9688 | -0.9596 | -84.9845 | -163.4342 | 0.2508 | 0.9210 | -4.3061 | 8.4849 | -12.7910 |
| 0.2088 | 0.1120 | 350 | -0.9577 | -0.9474 | -83.8031 | -160.2732 | 0.2460 | 0.9181 | -4.1880 | 8.2869 | -12.4749 |
| 0.3555 | 0.1152 | 360 | -0.9630 | -0.9531 | -80.5315 | -156.9231 | 0.2314 | 0.9210 | -3.8608 | 8.2791 | -12.1399 |
| 0.197 | 0.1184 | 370 | -0.9738 | -0.9651 | -78.5307 | -154.6240 | 0.2215 | 0.9246 | -3.6608 | 8.2493 | -11.9100 |
| 0.5949 | 0.1216 | 380 | -0.9840 | -0.9770 | -75.5271 | -151.0730 | 0.2079 | 0.9260 | -3.3604 | 8.1945 | -11.5549 |
| 0.3272 | 0.1248 | 390 | -0.9869 | -0.9810 | -74.7702 | -150.4924 | 0.2008 | 0.9332 | -3.2847 | 8.2121 | -11.4969 |
| 0.0613 | 0.1280 | 400 | -0.9777 | -0.9725 | -75.3109 | -151.8670 | 0.1995 | 0.9325 | -3.3388 | 8.2955 | -11.6343 |
| 0.3468 | 0.1312 | 410 | -0.9809 | -0.9769 | -73.3059 | -148.0693 | 0.1949 | 0.9289 | -3.1383 | 8.1163 | -11.2545 |
| 0.4202 | 0.1344 | 420 | -0.9788 | -0.9744 | -73.1894 | -148.3277 | 0.1945 | 0.9274 | -3.1266 | 8.1537 | -11.2804 |
| 0.2138 | 0.1376 | 430 | -0.9738 | -0.9687 | -73.5879 | -149.2324 | 0.1947 | 0.9289 | -3.1665 | 8.2044 | -11.3709 |
| 0.475 | 0.1408 | 440 | -0.9682 | -0.9636 | -73.3113 | -149.3820 | 0.1959 | 0.9303 | -3.1388 | 8.2470 | -11.3858 |
| 0.0014 | 0.1440 | 450 | -0.9672 | -0.9625 | -73.3271 | -149.4055 | 0.1976 | 0.9296 | -3.1404 | 8.2478 | -11.3882 |
| 0.0216 | 0.1472 | 460 | -0.9631 | -0.9585 | -73.6283 | -150.2499 | 0.1994 | 0.9303 | -3.1705 | 8.3021 | -11.4726 |
| 0.1517 | 0.1504 | 470 | -0.9563 | -0.9517 | -74.2343 | -151.6451 | 0.2018 | 0.9303 | -3.2311 | 8.3810 | -11.6121 |
| 0.3719 | 0.1536 | 480 | -0.9516 | -0.9466 | -74.7146 | -152.7081 | 0.2016 | 0.9303 | -3.2792 | 8.4393 | -11.7184 |
| 0.1176 | 0.1567 | 490 | -0.9502 | -0.9450 | -75.0118 | -153.3125 | 0.2020 | 0.9296 | -3.3089 | 8.4700 | -11.7789 |
| 0.2333 | 0.1599 | 500 | -0.9495 | -0.9444 | -75.1085 | -153.4672 | 0.2019 | 0.9303 | -3.3185 | 8.4758 | -11.7943 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.0+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sdfgvdcv/phi3-dream-interpreter | sdfgvdcv | 2025-05-28T22:15:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
]
| null | 2025-05-28T21:35:50Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
espnet/universa-wavlm_base_urgent24_multi-metric_noref | espnet | 2025-05-28T22:14:38Z | 5 | 0 | espnet | [
"espnet",
"audio",
"universa",
"multilingual",
"dataset:urgent24",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| null | 2025-02-26T01:45:34Z | ---
tags:
- espnet
- audio
- universa
language: multilingual
datasets:
- urgent24
license: cc-by-4.0
---
## ESPnet2 universa model
### `espnet/universa-wavlm_base_urgent24_multi-metric_noref`
This model was trained by ftshijt using urgent24 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Please check [Colab link](https://colab.research.google.com/drive/1y2mp5TqaiF7-a-_7iNUK2dXvSv4Tmejj?usp=sharing) for a simple demo of how to use UniVERSA.
## universa config
<details><summary>expand</summary>
```
config: conf/train_universa_wavlm_noref.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: update_exp/universa_train_universa_wavlm_noref_raw_fs16000
ngpu: 1
seed: 777
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
use_deepspeed: false
deepspeed_config: null
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
use_tf32: false
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_adapter: false
adapter: lora
save_strategy: all
adapter_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
category_sample_size: 10
train_shape_file:
- update_exp/universa_stats_raw/train/audio_shape
- update_exp/universa_stats_raw/train/ref_audio_shape
- update_exp/universa_stats_raw/train/ref_text_shape
valid_shape_file:
- update_exp/universa_stats_raw/valid/audio_shape
- update_exp/universa_stats_raw/valid/ref_audio_shape
- update_exp/universa_stats_raw/valid/ref_text_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 256000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
chunk_max_abs_length: null
chunk_discard_short_samples: true
train_data_path_and_name_and_type:
- - dump_ark/raw/train_update/wav.scp
- audio
- kaldi_ark
- - dump_ark/raw/train_update/metric.scp
- metrics
- metric
- - dump_ark/raw/train_update/ref_wav.scp
- ref_audio
- kaldi_ark
- - dump_ark/raw/train_update/text
- ref_text
- text
valid_data_path_and_name_and_type:
- - dump_ark/raw/dev_update/wav.scp
- audio
- kaldi_ark
- - dump_ark/raw/dev_update/metric.scp
- metrics
- metric
- - dump_ark/raw/dev_update/ref_wav.scp
- ref_audio
- kaldi_ark
- - dump_ark/raw/dev_update/text
- ref_text
- text
multi_task_dataset: false
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adamw
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
metric2id: dump_ark/raw/train_update/metric2id
metric2type: null
metric_pad_value: -100
token_list:
- <blank>
- <unk>
- s
- ▁
- t
- e
- ▁the
- i
- a
- o
- ▁a
- r
- ▁to
- d
- ▁and
- ''''
- m
- n
- ing
- u
- y
- p
- c
- ▁of
- l
- ed
- ▁I
- ▁in
- er
- re
- ▁it
- ▁you
- ar
- ▁f
- ▁is
- ▁that
- ','
- .
- in
- al
- g
- 'on'
- ▁b
- b
- or
- ▁c
- ▁s
- f
- h
- ▁we
- an
- en
- ▁for
- le
- ▁p
- ly
- es
- w
- ▁re
- ▁on
- ▁m
- ▁be
- ic
- ll
- th
- ▁he
- k
- ur
- ve
- ▁with
- ▁so
- ▁from
- ▁was
- v
- ch
- st
- ▁w
- ▁i
- ▁this
- ▁de
- ▁like
- ▁do
- ce
- at
- il
- ck
- ▁A
- ▁have
- ▁not
- ad
- ▁st
- ow
- ro
- ne
- ▁me
- ▁my
- ▁but
- ation
- ▁at
- ▁or
- '-'
- ter
- ent
- ▁B
- ▁n
- ▁know
- ▁t
- out
- ▁are
- nd
- ▁one
- ▁li
- ▁g
- ▁The
- ol
- ion
- te
- ▁go
- ut
- ▁as
- ▁just
- as
- ▁sh
- ▁they
- is
- ▁C
- et
- ▁h
- ▁an
- ▁there
- ▁up
- ▁S
- ▁M
- ▁she
- ▁by
- ▁su
- om
- ▁can
- us
- ▁your
- ng
- ▁con
- el
- ▁us
- ment
- z
- ▁see
- ▁ab
- ▁what
- ▁out
- ▁her
- me
- ate
- ▁all
- ▁th
- ▁if
- ▁right
- ▁his
- ▁ma
- ▁lo
- ▁which
- ide
- ▁P
- ▁more
- ▁then
- ul
- ast
- x
- ight
- ill
- ▁So
- ▁sp
- ▁going
- ▁some
- ure
- ▁their
- ig
- ▁no
- ▁ro
- ▁think
- ▁who
- ▁pro
- ver
- ive
- est
- ▁co
- ▁di
- '0'
- ist
- ▁k
- age
- ▁d
- ▁time
- ▁L
- ies
- ▁will
- ▁man
- ▁when
- ▁D
- les
- ▁F
- ▁want
- ff
- ity
- ▁un
- '?'
- ▁start
- ▁G
- ▁uh
- ▁get
- ok
- ▁take
- ▁po
- li
- ▁ho
- ▁way
- ▁don
- ▁yeah
- ▁really
- ▁say
- ▁look
- ▁good
- ▁ra
- ▁pr
- ▁had
- ttle
- ▁comp
- ort
- ish
- ▁ex
- ally
- ▁sa
- ▁how
- end
- ant
- ▁O
- ▁um
- way
- ance
- ▁other
- ▁two
- ine
- ever
- able
- ▁com
- other
- ▁first
- ▁back
- ▁al
- ers
- ions
- ▁now
- ▁off
- ning
- ▁down
- ▁has
- ▁than
- ▁car
- ▁Th
- very
- ice
- ▁dr
- ▁been
- ▁him
- ▁here
- ated
- '5'
- ▁hand
- ▁day
- ▁hear
- each
- ▁would
- ▁over
- ▁oh
- ▁cha
- ood
- ▁did
- ugh
- ▁per
- ▁let
- ▁str
- ▁tra
- ▁got
- ext
- '1'
- ▁We
- ▁Shields
- ▁come
- ▁should
- ▁could
- light
- '2'
- ▁people
- ▁again
- ▁year
- ▁app
- ▁into
- ▁any
- ▁N
- ▁mean
- ▁o
- ▁mus
- ▁lot
- ▁said
- ▁long
- ▁these
- ▁lea
- sh
- ▁vi
- ▁part
- ▁every
- ▁our
- ▁You
- ious
- ▁fight
- ▁Ch
- ark
- ▁may
- ▁Hammer
- ▁because
- ▁most
- ▁came
- ▁four
- ful
- ▁No
- ize
- ▁where
- ▁okay
- ▁much
- ▁ask
- ▁through
- ▁before
- ▁work
- ▁even
- ▁three
- mber
- ▁win
- ▁flight
- ake
- K
- ▁place
- ▁play
- ▁though
- ▁pound
- ▁bit
- land
- ▁va
- ▁talk
- ▁kind
- ▁Line
- ▁make
- hap
- ▁big
- ▁leav
- ▁something
- ▁game
- ▁under
- ▁feel
- self
- ▁give
- ▁includ
- U
- ▁twenty
- ▁guard
- ▁left
- ▁round
- ▁great
- body
- ▁gra
- ress
- lso
- '3'
- ▁everything
- ▁those
- ▁after
- ▁tell
- ▁need
- ▁yes
- qua
- ham
- ▁minutes
- ▁question
- ▁around
- ▁punch
- ▁course
- ▁gonna
- ▁person
- ▁move
- ▁plan
- ▁ear
- ept
- ▁Airport
- ▁Okay
- ▁found
- ▁seven
- ▁help
- que
- ▁qui
- ▁keep
- ▁guys
- ▁house
- ▁run
- ▁turn
- ▁better
- ▁stop
- ward
- ddle
- ▁second
- ground
- ▁world
- ▁high
- ▁point
- ▁hold
- ▁call
- '6'
- ▁actually
- ▁probably
- ▁heaven
- ▁speci
- ▁everyone
- ▁why
- ▁presen
- ▁thir
- lright
- ▁eye
- eath
- ▁Tak
- '!'
- '"'
- '4'
- ▁hundred
- ▁answer
- ▁small
- ▁wait
- ▁nothing
- q
- '8'
- V
- ▁countr
- ▁problem
- ▁continu
- ▁close
- ▁priva
- ▁20
- ▁pleas
- ▁walk
- ▁open
- ▁lay
- ▁Station
- ▁moment
- ▁Yeah
- ▁public
- possibl
- ▁happen
- together
- ▁while
- asically
- ▁money
- ▁wrong
- B
- ▁puzzle
- '7'
- ▁journ
- ▁rainbow
- ▁thousand
- I
- '9'
- S
- P
- '%'
- A
- D
- L
- F
- ’
- O
- G
- N
- á
- C
- $
- Z
- Y
- R
- E
- J
- W
- M
- H
- j
- –
- ;
- Q
- X
- ']'
- −
- '&'
- T
- '['
- <sos/eos>
init: xavier_uniform
model_conf: {}
use_ref_audio: false
use_ref_text: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
universa: base
universa_conf:
embedding_dim: 256
audio_encoder_type: transformer
audio_encoder_params:
num_blocks: 4
attention_heads: 4
linear_units: 1024
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
concat_after: false
positionwise_layer_type: linear
positionwise_conv_kernel_size: 1
layer_drop_rate: 0.1
qk_norm: false
use_flash_attn: false
text_encoder_type: transformer
text_encoder_params:
num_blocks: 4
attention_heads: 4
linear_units: 1024
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: linear
normalize_before: true
concat_after: false
positionwise_layer_type: linear
positionwise_conv_kernel_size: 1
layer_drop_rate: 0.1
qk_norm: false
use_flash_attn: false
cross_attention_type: multihead
cross_attention_params:
n_head: 4
dropout_rate: 0.1
pooling_type: mean
projector_type: linear
multi_branch: true
required:
- output_dir
- metric2id
version: '202412'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Tashi-Projects/DZO_ASR_mms1ball | Tashi-Projects | 2025-05-28T22:13:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-28T11:06:14Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: DZO_ASR_mms1ball
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DZO_ASR_mms1ball
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5484
- Wer: 0.3622
## Model description
More information needed
## Intended uses & limitations
More information needed
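No usage example is given; as a fine-tuned MMS (wav2vec2) checkpoint it can be tried with the standard speech-recognition pipeline (a minimal sketch; the audio path is a placeholder and 16 kHz input is expected):
```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned MMS checkpoint.
asr = pipeline("automatic-speech-recognition", model="Tashi-Projects/DZO_ASR_mms1ball")
print(asr("sample_audio.wav")["text"])  # placeholder path; 16 kHz audio expected
```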
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 5.9857 | 0.8782 | 400 | 1.1527 | 0.6884 |
| 1.3889 | 1.7552 | 800 | 0.9422 | 0.5880 |
| 1.187 | 2.6323 | 1200 | 0.8545 | 0.5501 |
| 1.1338 | 3.5093 | 1600 | 0.7957 | 0.5231 |
| 1.0482 | 4.3864 | 2000 | 0.7578 | 0.4994 |
| 1.0305 | 5.2634 | 2400 | 0.7273 | 0.4851 |
| 0.9609 | 6.1405 | 2800 | 0.7110 | 0.4754 |
| 0.9499 | 7.0176 | 3200 | 0.6915 | 0.4636 |
| 0.9284 | 7.8957 | 3600 | 0.6732 | 0.4544 |
| 0.9071 | 8.7728 | 4000 | 0.6631 | 0.4484 |
| 0.8788 | 9.6498 | 4400 | 0.6616 | 0.4447 |
| 0.8719 | 10.5269 | 4800 | 0.6410 | 0.4271 |
| 0.8536 | 11.4040 | 5200 | 0.6324 | 0.4229 |
| 0.8389 | 12.2810 | 5600 | 0.6156 | 0.4139 |
| 0.8155 | 13.1581 | 6000 | 0.6139 | 0.4076 |
| 0.8226 | 14.0351 | 6400 | 0.6111 | 0.4054 |
| 0.8108 | 14.9133 | 6800 | 0.5977 | 0.3992 |
| 0.7765 | 15.7903 | 7200 | 0.5962 | 0.3981 |
| 0.8176 | 16.6674 | 7600 | 0.5899 | 0.3939 |
| 0.7887 | 17.5445 | 8000 | 0.5890 | 0.3897 |
| 0.761 | 18.4215 | 8400 | 0.5807 | 0.3812 |
| 0.7703 | 19.2986 | 8800 | 0.5763 | 0.3866 |
| 0.7602 | 20.1756 | 9200 | 0.5717 | 0.3782 |
| 0.7642 | 21.0527 | 9600 | 0.5643 | 0.3744 |
| 0.7558 | 21.9308 | 10000 | 0.5686 | 0.3746 |
| 0.7238 | 22.8079 | 10400 | 0.5643 | 0.3716 |
| 0.7797 | 23.6850 | 10800 | 0.5620 | 0.3691 |
| 0.7402 | 24.5620 | 11200 | 0.5528 | 0.3665 |
| 0.7404 | 25.4391 | 11600 | 0.5508 | 0.3659 |
| 0.7125 | 26.3161 | 12000 | 0.5567 | 0.3650 |
| 0.724 | 27.1932 | 12400 | 0.5471 | 0.3633 |
| 0.7325 | 28.0703 | 12800 | 0.5467 | 0.3623 |
| 0.7247 | 28.9484 | 13200 | 0.5481 | 0.3617 |
| 0.7307 | 29.8255 | 13600 | 0.5484 | 0.3622 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir | BootesVoid | 2025-05-28T22:12:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:12:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmb8Gn0Jk0Mfblexp52Lf6W5B_Cmb8H3Xyv0Mmjlexpabqqu6Ir
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir/discussions) to add images that show off what you’ve made with this LoRA.
|
BKVNP/bart_prefix_finetune | BKVNP | 2025-05-28T22:12:45Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:adapter:facebook/bart-base",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T17:05:24Z | ---
library_name: peft
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart_prefix_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_prefix_finetune
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1378
- Rouge1: 0.4018
- Rouge2: 0.173
- Rougel: 0.2656
- Rougelsum: 0.3728
- Gen Len: 74.6267
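Since this repository contains a prefix-tuning adapter for BART rather than full weights, a minimal seq2seq loading sketch looks like the following (hedged; the task and prompt format used in training are not documented here):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Attach the prefix-tuning adapter to the BART base model.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
model = PeftModel.from_pretrained(base, "BKVNP/bart_prefix_finetune")

# The ROUGE metrics above suggest a summarization-style task.
inputs = tokenizer("Long input text to summarize ...", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=96)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```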
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.6484 | 0.2786 | 10000 | 2.2900 | 0.3932 | 0.1633 | 0.251 | 0.3637 | 77.2435 |
| 2.7934 | 0.5573 | 20000 | 2.1941 | 0.3971 | 0.1677 | 0.2597 | 0.368 | 74.5763 |
| 2.6941 | 0.8359 | 30000 | 2.1688 | 0.3996 | 0.1699 | 0.2622 | 0.3702 | 75.5441 |
| 2.654 | 1.1145 | 40000 | 2.1533 | 0.4008 | 0.1716 | 0.2642 | 0.3713 | 74.3602 |
| 2.6302 | 1.3931 | 50000 | 2.1450 | 0.4016 | 0.1726 | 0.2648 | 0.3723 | 74.8291 |
| 2.6214 | 1.6718 | 60000 | 2.1411 | 0.4018 | 0.1725 | 0.2651 | 0.3727 | 75.1359 |
| 2.6178 | 1.9504 | 70000 | 2.1378 | 0.4018 | 0.173 | 0.2656 | 0.3728 | 74.6267 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.48.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
golf2248/sn11-v4-10-1 | golf2248 | 2025-05-28T22:11:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:09:51Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and roleplay (RP) use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended):
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
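Expressed as `generate()` keyword arguments in transformers, the recommended profile looks roughly like the sketch below (the prompt is a placeholder, and `min_p` support assumes a recent transformers release):
```Python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    device="cuda",
    torch_dtype=torch.bfloat16,
)
messages = [{"role": "user", "content": [{"type": "text", "text": "Outline a test plan for a URL parser."}]}]
output = pipe(
    text=messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,        # recommended profile from above
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repetition_penalty=1.1,
)
print(output[0]["generated_text"][-1]["content"])
```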
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b shows roughly a 10-20% improvement over the base model on most benchmarks, with notably larger gains on some.
I ran scaled-down versions of each benchmark listed and averaged the numbers, but I can't verifiably claim to have run each full benchmark. (I ran out of budget and am running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
shallow6414/sn11-w3-21 | shallow6414 | 2025-05-28T22:11:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:08:05Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and roleplay (RP) use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K-token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended):
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b shows roughly a 10-20% improvement over the base model on most benchmarks, with notably larger gains on some.
I ran scaled-down versions of each benchmark listed and averaged the numbers, but I can't verifiably claim to have run each full benchmark. (I ran out of budget and am running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization (see the loading sketch after this list)
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
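Since the precision notes above mention int8 quantization, here is a minimal 8-bit loading sketch using bitsandbytes; this is an assumption about the intended setup rather than an official recipe, and `AutoModelForImageTextToText` assumes a recent transformers release:
```Python
from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig

# Sketch: load the checkpoint with 8-bit weights via bitsandbytes.
bnb = BitsAndBytesConfig(load_in_8bit=True)
processor = AutoProcessor.from_pretrained("tesslate/synthia-s1-27b")
model = AutoModelForImageTextToText.from_pretrained(
    "tesslate/synthia-s1-27b",
    quantization_config=bnb,
    device_map="auto",
)
```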
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
quickstep3621/dippy-g1-21-1 | quickstep3621 | 2025-05-28T22:10:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:05:16Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and roleplay (RP) use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K-token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended):
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b shows roughly a 10-20% improvement over the base model on most benchmarks, with notably larger gains on some.
I ran scaled-down versions of each benchmark listed and averaged the numbers, but I can't verifiably claim to have run each full benchmark. (I ran out of budget and am running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
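The GGUF builds above can be run locally with `llama-cpp-python`; a minimal sketch follows, in which the filename glob, context size, and sampler values are assumptions:
```Python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Tesslate/Synthia-S1-27b-Q4_K_M-GGUF",
    filename="*Q4_K_M.gguf",  # glob; assumes one matching file in the repo
    n_ctx=8192,
)
out = llm(
    "Explain the Thought/Solution output format in one paragraph.",
    max_tokens=256,
    temperature=0.7,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```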
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
BootesVoid/cmb8gnk640mfulexpo60fakqn_cmb8h01260ml0lexpahkqfss0 | BootesVoid | 2025-05-28T22:10:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:09:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: A123
---
# Cmb8Gnk640Mfulexpo60Fakqn_Cmb8H01260Ml0Lexpahkqfss0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `A123` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "A123",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8gnk640mfulexpo60fakqn_cmb8h01260ml0lexpahkqfss0/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8gnk640mfulexpo60fakqn_cmb8h01260ml0lexpahkqfss0', weight_name='lora.safetensors')
image = pipeline('A123').images[0]
```
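Continuing from the snippet above, you can make generation reproducible and save the result; the guidance and step values below are assumptions, not tuned settings:
```py
import torch

# Seeded generation for reproducibility, then save to disk.
generator = torch.Generator("cuda").manual_seed(42)
image = pipeline(
    "A123",
    guidance_scale=3.5,       # assumption: a common FLUX.1-dev setting
    num_inference_steps=28,   # assumption
    generator=generator,
).images[0]
image.save("a123.png")
```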
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8gnk640mfulexpo60fakqn_cmb8h01260ml0lexpahkqfss0/discussions) to add images that show off what you’ve made with this LoRA.
|
while0628/student_model_epoch180 | while0628 | 2025-05-28T22:08:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:05:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saukko/Abliterated-Dolphin3.0-R1-Mistral-24B | saukko | 2025-05-28T22:07:12Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"base_model:cognitivecomputations/Dolphin3.0-R1-Mistral-24B",
"base_model:finetune:cognitivecomputations/Dolphin3.0-R1-Mistral-24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T22:31:34Z | ---
library_name: transformers
license: apache-2.0
language:
- en
base_model:
- cognitivecomputations/Dolphin3.0-R1-Mistral-24B
---
# Abliterated-Dolphin3.0-R1-Mistral-24B
"Abliterated Dolphin" is a result of my 3AM brain reading about technique called abliteration and then thinking what would happen if I tried to abliterate a model that is already relatively free, such as [Dolphin](https://huggingface.co/cognitivecomputations?search_models=dolphin).
Heavily inspired by mlabonne's [article on abliteration](https://huggingface.co/blog/mlabonne/abliteration), which shows how to redirect refusals and effectively remove, or *ablate*, censorship from a language model.
There is really no deeper meaning to any of this than pure curiosity.
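For the curious, the core trick reduces to a little linear algebra: estimate a "refusal direction" from mean activations on refused vs. accepted prompts, then project it out of the weights. A toy sketch of that idea (simplified, and not the actual code I ran):
```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Both inputs: (n_samples, d_model) residual-stream activations from one layer.
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def ablate_rows(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove each row's component along `direction`, so the layer can no longer
    # write along the refusal direction.
    return weight - torch.outer(weight @ direction, direction)
```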
***
## Model Details
GGUF quants: [saukko/Abliterated-Dolphin3.0-R1-Mistral-24B-GGUF](https://huggingface.co/saukko/Abliterated-Dolphin3.0-R1-Mistral-24B-GGUF)
### Model Description
This is basically a **dumbed down** version of the original Dolphin model I used as a base, as I have **not** done any [**DPOs to heal the damage caused by abliteration**](https://huggingface.co/blog/mlabonne/abliteration#%E2%9A%96%EF%B8%8F-dpo-fine-tuning).
Don't try to do anything meaningful with this model. Use the original [Dolphin3.0-R1-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B) instead.
### Uses
There's really no good use for this model as-is. This is basically a Dolphin that has had its brain poked at and then glued back together by a self-taught, unlicensed doctor who got lost and found himself in a surgery room.
### Bias, Risks, and Limitations
- Bias: none or very low
- Risks: a lot. please use the original model instead
- Limitations: same as the original, but this one is a lot dumber
### Training, Evaluation and Model Examination
TBD
***
## Technical Specifications
I strongly suggest you look directly at the sources I used myself. Go see [mlabonne](https://huggingface.co/mlabonne) here on HF to start with. Below are some of the many sources I dug through on my mission.
***
## References
- https://mlabonne.github.io/blog/posts/2024-06-04_Uncensor_any_LLM_with_abliteration.html
- https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction
- https://github.com/FailSpy/abliterator
- https://github.com/llm-attacks/llm-attacks
|
bowen118/s1-20250528_212248 | bowen118 | 2025-05-28T22:06:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:24:06Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: s1-20250528_212248
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for s1-20250528_212248
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bowen118/s1-20250528_212248", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bowen118-stanford-university/papertrace/runs/hcwzzvth)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
iam-bigsam/Orpheus_text-to-speech | iam-bigsam | 2025-05-28T22:05:10Z | 0 | 0 | fastai | [
"fastai",
"text-to-speech",
"af",
"ae",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"arxiv:1910.09700",
"base_model:ByteDance-Seed/BAGEL-7B-MoT",
"base_model:finetune:ByteDance-Seed/BAGEL-7B-MoT",
"license:gemma",
"region:us"
]
| text-to-speech | 2025-05-28T21:47:00Z | ---
license: gemma
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
language:
- af
- ae
metrics:
- character
- accuracy
- brier_score
- code_eval
base_model:
- google/gemma-3n-E4B-it-litert-preview
- ByteDance-Seed/BAGEL-7B-MoT
new_version: google/gemma-3n-E4B-it-litert-preview
pipeline_tag: text-to-speech
library_name: fastai
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
while0628/student_model_data8000_epoch16 | while0628 | 2025-05-28T22:02:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:59:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Moryjj/parst5_3blocks_4 | Moryjj | 2025-05-28T21:56:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T21:56:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cam-1000/MNLP_M2_rag_model | cam-1000 | 2025-05-28T21:52:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-18T21:00:48Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M2_mcqa_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M2_mcqa_model2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
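For reference, these correspond roughly to the following 🤗 `TrainingArguments` (a sketch; the output directory is an assumption):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="MNLP_M2_mcqa_model2",  # assumption
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
)
```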
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6677 | 1.0 | 4380 | 1.5840 |
| 1.6558 | 2.0 | 8760 | 1.5796 |
| 1.6602 | 3.0 | 13140 | 1.5785 |
| 1.6553 | 4.0 | 17520 | 1.5787 |
| 1.6479 | 5.0 | 21900 | 1.5787 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Ainxz/phi3.5-pucv | Ainxz | 2025-05-28T21:49:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:48:51Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ainxz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
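A minimal inference sketch (it assumes the uploaded weights load directly as a causal LM through transformers; adjust device and dtype for your hardware):
```python
from transformers import pipeline

# Assumption: the repo contains merged weights loadable for text generation.
pipe = pipeline("text-generation", model="Ainxz/phi3.5-pucv", device_map="auto")
print(pipe("Hello! What can you help me with?", max_new_tokens=128)[0]["generated_text"])
```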
|
kuds/rl-lunar-lander-ppo | kuds | 2025-05-28T21:47:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"reinforcement-learning",
"LunarLander-V3",
"deep-reinforcement-learning",
"en",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T21:44:34Z | ---
language:
- en
library_name: stable-baselines3
tags:
- reinforcement-learning
- LunarLander-V3
- deep-reinforcement-learning
- stable-baselines3
model-index:
- name: Lunar Lander
results:
- task:
type: game-play # Required. Example: automatic-speech-recognition
metrics:
- type: mean_reward
value: 200
name: mean_reward
verified: false
--- |
while0628/student_model_data8000_epoch14 | while0628 | 2025-05-28T21:45:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:43:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
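Until the authors fill this in, a minimal text-generation sketch based only on the repo's tags (`llama`, `text-generation`, `conversational`) might look like the following; the chat template and generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "while0628/student_model_data8000_epoch14"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer ships a chat template; adjust if it does not.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```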
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/Llama-2-7b-hf-amazon-2025-05-28 | morturr | 2025-05-28T21:43:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-05-28T13:44:32Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-amazon-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-amazon-2025-05-28
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
alana-foto-viral-hd/foto.18.alana.video.alana.foto.viral.alana.flores.foto.viral.x.alana.flores.original | alana-foto-viral-hd | 2025-05-28T21:41:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:39:26Z | <a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
<a rel="nofollow" href="https://viralflix.xyz/?or">🔴 CLICK HERE 🌐==►► Download Now)</a> |
kuds/rl-lunar-lander-dqn | kuds | 2025-05-28T21:37:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"en",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-10T22:25:34Z | ---
library_name: stable-baselines3
language:
- en
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: Lunar Lander
results:
- task:
type: game-play
metrics:
- type: mean_reward
value: 225.25
name: mean_reward
verified: false
---
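As a minimal sketch (not from the card itself), a DQN policy for this environment can be loaded and evaluated with stable-baselines3 as follows; the checkpoint filename is an assumption, since this card does not list one.

```python
import gymnasium as gym  # LunarLander-v3 also requires gymnasium[box2d]
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# "dqn_lunar_lander.zip" is an assumed filename; point this at the actual checkpoint.
model = DQN.load("dqn_lunar_lander.zip")
env = gym.make("LunarLander-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```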
## Finding Theta Blog Posts:
- [Solving Gymnasium's Lunar Lander with Deep Q Learning (DQN)](https://www.findingtheta.com/blog/solving-gymnasiums-lunar-lander-with-deep-q-learning-dqn)
- [Comparing how PPO, SAC, and DQN Perform on Gymnasium's Lunar Lander](https://www.findingtheta.com/blog/comparing-how-ppo-sac-and-dqn-perform-on-gymnasiums-lunar-lander) |
JqnFhtagn/JavaHerenciaGPT_V4 | JqnFhtagn | 2025-05-28T21:36:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:34:03Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JqnFhtagn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
acchf/testo | acchf | 2025-05-28T21:35:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T20:40:36Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: testo
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for testo
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="acchf/testo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
alana-foto-video-link/19.alana.video.alana.foto.viral.alana.flores.foto.viral.x.alana.flores.telegram | alana-foto-video-link | 2025-05-28T21:34:03Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:33:27Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores">🌐 CLICK HERE 🟢==►► WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
while0628/student_model_data8000_epoch12 | while0628 | 2025-05-28T21:29:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:26:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rsh-raj/node-commits_with_defn | rsh-raj | 2025-05-28T21:26:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/codellama-7b-bnb-4bit",
"base_model:adapter:unsloth/codellama-7b-bnb-4bit",
"region:us"
]
| null | 2025-05-28T21:21:58Z | ---
base_model: unsloth/codellama-7b-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
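Until the authors fill this in, a minimal sketch based on the metadata (a PEFT adapter on top of `unsloth/codellama-7b-bnb-4bit`) might look like the following; the prompt format is an assumption.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/codellama-7b-bnb-4bit"
adapter_id = "rsh-raj/node-commits_with_defn"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Loading the 4-bit base requires bitsandbytes to be installed.
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# The expected prompt format is undocumented; this is an assumed example.
prompt = "Write a commit message for a change that adds input validation."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```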
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
morturr/Mistral-7B-v0.1-dadjokes-2025-05-28 | morturr | 2025-05-28T21:25:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T13:07:04Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-dadjokes-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-dadjokes-2025-05-28
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
morturr/Llama-2-7b-hf-one_liners-2025-05-28 | morturr | 2025-05-28T21:25:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-05-28T13:24:29Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-one_liners-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-one_liners-2025-05-28
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
morturr/Mistral-7B-v0.1-one_liners-2025-05-28 | morturr | 2025-05-28T21:24:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T13:18:25Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-one_liners-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-one_liners-2025-05-28
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
07-Alana-Flores/Video.Leaked.Alana.Flores.Foto.Viral.X.Original.Video.Alana.Flores.official.link | 07-Alana-Flores | 2025-05-28T21:24:20Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:23:58Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores">🌐 CLICK HERE 🟢==►► WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
kaamd/gemma3-ff-frakenstein | kaamd | 2025-05-28T21:23:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T21:08:29Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: v1-20250525-165028
results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1-20250525-165028
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.1
|
akshugboi/aksbosonfinal | akshugboi | 2025-05-28T21:21:35Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T20:55:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AKSBOI
---
# Aksbosonfinal
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AKSBOI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "AKSBOI",
    "lora_weights": "https://huggingface.co/akshugboi/aksbosonfinal/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('akshugboi/aksbosonfinal', weight_name='lora.safetensors')
image = pipeline('AKSBOI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/akshugboi/aksbosonfinal/discussions) to add images that show off what you’ve made with this LoRA.
|
Cikgu-CCTV-Wiring-6-min/Cikgu.CCTV.Wiring.Fadhilah.Zainal.Full.6.Minutes.viral.hd.videos | Cikgu-CCTV-Wiring-6-min | 2025-05-28T21:19:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:17:16Z | [url=https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf]🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 ==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶[/url]
[url=https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf]🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐[/url]
[url=https://viralflix.xyz/?or]🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 ==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶[/url] |
19-lubna-qureshi-viral-video-highway-expre/original.news.18.lubna.qureshi.viral.video.highway.lubna.qureshi.and.manohar.lal.dhakad.bjp | 19-lubna-qureshi-viral-video-highway-expre | 2025-05-28T21:19:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:19:06Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
|
morturr/Mistral-7B-v0.1-headlines-2025-05-28 | morturr | 2025-05-28T21:17:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T13:07:36Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-headlines-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-headlines-2025-05-28
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
ckoozzzu/NextPlace | ckoozzzu | 2025-05-28T21:16:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T20:36:11Z | # NextPlace
- Models for the NextPlace subnet |
Clean6/LegalQwen3-8B | Clean6 | 2025-05-28T21:15:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-28T21:14:08Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Clean6
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
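A minimal loading sketch, assuming the repo's bitsandbytes 4-bit weights are used as shipped; the example prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Clean6/LegalQwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo ships bitsandbytes 4-bit weights, so bitsandbytes must be installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the doctrine of consideration."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```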
|
TheGardener/KD-Embedding-and-MLP-ver3-Llama3.2-0.62B-epoch-6th-ver1 | TheGardener | 2025-05-28T21:14:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:14:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
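Until the authors fill this in, a minimal completion sketch based on the repo's tags (`llama`, `text-generation`) might look like the following; the prompt and generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheGardener/KD-Embedding-and-MLP-ver3-Llama3.2-0.62B-epoch-6th-ver1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```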
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7 | BootesVoid | 2025-05-28T21:14:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T21:14:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jane
---
# Cmb8Eq9Tt0Lk7Lexpbos185T1_Cmb8Euonn0Llnlexp9Ouv9Qk7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jane` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "jane",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7', weight_name='lora.safetensors')
image = pipeline('jane').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7/discussions) to add images that show off what you’ve made with this LoRA.
|
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_12_2_song_ratio_3_epoch_49 | winnieyangwannan | 2025-05-28T21:14:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:11:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mubtakir/baserah_ai | Mubtakir | 2025-05-28T21:12:44Z | 0 | 0 | null | [
"ai",
"artificial intelligence",
"custom model",
"ar",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T21:10:09Z | ---
language:
- ar
tags:
- ai
- artificial intelligence
- custom model
license: apache-2.0
---
# Baserah AI Model
## Overview
This is an innovative AI model developed from scratch, without relying on traditional neural networks or existing AI libraries.
## Features
- 🚀 A new, innovative technique
- 🔧 No dependence on traditional AI libraries
- 🌟 Improved performance
## Developer
Developed by: Mubtakir
## License
MIT
|
yale-cultural-heritage/name-parser-model | yale-cultural-heritage | 2025-05-28T21:10:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:yale-cultural-heritage/name-parser-model",
"base_model:finetune:yale-cultural-heritage/name-parser-model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T15:28:02Z | ---
library_name: transformers
license: apache-2.0
base_model: yale-cultural-heritage/name-parser-model
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: name-parser-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# name-parser-model
This model is a fine-tuned version of [yale-cultural-heritage/name-parser-model](https://huggingface.co/yale-cultural-heritage/name-parser-model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0332
- Accuracy: 0.9921
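A minimal usage sketch for this seq2seq name parser; the expected input and output format is an assumption, since the card does not document it.

```python
from transformers import pipeline

# Input/output format is assumed, not documented in the card.
parser = pipeline("text2text-generation", model="yale-cultural-heritage/name-parser-model")
print(parser("Smith, John A., 1850-1920")[0]["generated_text"])
```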
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adafactor; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.041 | 3.1952 | 1000 | 0.0352 | 0.9912 |
| 0.0369 | 6.3904 | 2000 | 0.0345 | 0.9915 |
| 0.0358 | 9.5856 | 3000 | 0.0336 | 0.9917 |
| 0.0349 | 12.7808 | 4000 | 0.0333 | 0.9919 |
| 0.0337 | 15.9760 | 5000 | 0.0331 | 0.9920 |
| 0.0332 | 19.1696 | 6000 | 0.0334 | 0.9919 |
| 0.0328 | 22.3648 | 7000 | 0.0332 | 0.9921 |
| 0.0323 | 25.56 | 8000 | 0.0333 | 0.9921 |
| 0.0318 | 28.7552 | 9000 | 0.0333 | 0.9921 |
| 0.032 | 31.9504 | 10000 | 0.0332 | 0.9921 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj | BootesVoid | 2025-05-28T21:09:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T21:09:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: riley
---
# Cmb8E3Aj20L8Blexpn1Wecn99_Cmb8E91Wp0Lb3Lexpwu5Jdtqj
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `riley` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "riley",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj', weight_name='lora.safetensors')
image = pipeline('riley').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj/discussions) to add images that show off what you’ve made with this LoRA.
|
while0628/student_model_epoch120 | while0628 | 2025-05-28T21:06:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:03:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_12_2_song_ratio_3_epoch_19 | winnieyangwannan | 2025-05-28T21:06:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:04:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
satyadeep123/summary_mod | satyadeep123 | 2025-05-28T21:06:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:MBZUAI/LaMini-Flan-T5-248M",
"base_model:finetune:MBZUAI/LaMini-Flan-T5-248M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T20:55:32Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MBZUAI/LaMini-Flan-T5-248M
tags:
- generated_from_trainer
model-index:
- name: summary_mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summary_mod
This model is a fine-tuned version of [MBZUAI/LaMini-Flan-T5-248M](https://huggingface.co/MBZUAI/LaMini-Flan-T5-248M) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
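Since no usage example is given, here is a minimal sketch, assuming the hub repo id `satyadeep123/summary_mod` from this row and that the model is meant for summarization (suggested by its name and its LaMini-Flan-T5 base):
```python
# Usage sketch only; the repo id and task are assumptions based on this row's metadata.
from transformers import pipeline

summarizer = pipeline("summarization", model="satyadeep123/summary_mod")

text = (
    "LaMini-Flan-T5 is a family of compact instruction-tuned models distilled "
    "from larger Flan-T5 checkpoints, aimed at lightweight NLP workloads."
)
print(summarizer(text, max_length=48, min_length=10)[0]["summary_text"])
```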
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
original-link-18-lubna-qureshi-viral-video/FULL.LINK.lubna.qureshi.viral.video | original-link-18-lubna-qureshi-viral-video | 2025-05-28T21:05:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:05:06Z |
|
manas1111j/corgy_dog_LoRA | manas1111j | 2025-05-28T21:05:30Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-05-28T09:56:17Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - manas1111j/corgy_dog_LoRA
<Gallery />
## Model description
These are manas1111j/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/manas1111j/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
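Until the TODO above is filled in, the following is a plausible sketch that follows the standard diffusers SDXL LoRA recipe; it is an assumption, not an official example from the author:
```python
# Sketch of the usual diffusers SDXL + LoRA flow; not the card author's snippet.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repo (weight file auto-detected).
pipeline.load_lora_weights("manas1111j/corgy_dog_LoRA")

# Use the trigger phrase documented above.
image = pipeline("a photo of TOK dog").images[0]
image.save("corgy.png")
```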
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
morturr/Llama-2-7b-hf-headlines-2025-05-28 | morturr | 2025-05-28T21:05:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-05-28T13:34:47Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-headlines-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-headlines-2025-05-28
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_12_2_song_ratio_3_epoch_9 | winnieyangwannan | 2025-05-28T21:04:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:02:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtl-llm/qwen2.5coder-7b-origen-pymtl | rtl-llm | 2025-05-28T21:01:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:57:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mohammed-orabi2/qwen-poetry-arabic-lora | mohammed-orabi2 | 2025-05-28T21:00:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
]
| null | 2025-05-28T20:41:48Z | ---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
## Model Card for Model ID
**Model ID:** mohammed-orabi2/qwen-poetry-lora2
---
## Model Details
**Model Description:**
This is a LoRA fine-tuned version of the `Qwen/Qwen3-1.7B` model, specifically trained to generate Arabic poetic responses in a conversational format. It was trained on a dataset of 1,000 synthetic Arabic poetry dialogues, each containing a user query and a poetic response.
**Developed by:** Mohammed Orabi
**Shared by:** mohammed-orabi2
**Model type:** Causal Language Model with LoRA adaptation
**Language(s) (NLP):** Arabic
**License:** Apache 2.0 (inherits from Qwen3-1.7B)
**Finetuned from model:** Qwen/Qwen3-1.7B
**Model Sources:**
**Repository:** [https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2](https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2)
---
## Uses
**Direct Use:**
This model can be used for generating Arabic poetry in response to user queries, particularly in cultural, educational, or creative chatbot applications.
**Downstream Use:**
* Poetry recommendation systems
* Arabic literature generation tools
* Creative writing assistants
**Out-of-Scope Use:**
* Non-Arabic generation tasks
* Factual or knowledge-based QA tasks
* Sensitive or safety-critical environments
---
## Bias, Risks, and Limitations
The model was fine-tuned on synthetic poetic data and may:
* Favor specific poetic structures
* Fail on factual, political, or philosophical prompts
* Generate romantic or metaphorical content that could be misinterpreted in serious contexts
Users should avoid relying on this model for objective or critical outputs.
---
## Recommendations
Users (both direct and downstream) should be aware of the creative, poetic intent of this model. For factual content, use general-purpose LLMs. Evaluate outputs manually before publishing or broadcasting.
---
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", device_map="auto", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, "mohammed-orabi2/qwen-poetry-arabic-lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
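# The Arabic prompt below asks: "Write me a line of poetry about success."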
prompt = "اكتب لي بيت شعر عن النجاح."
chat = [{"role": "user", "content": prompt}]
formatted_prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
---
## Training Details
**Training Data:**
1,000 synthetic Arabic poetic dialogues (prompt + poetic response) generated programmatically.
**Preprocessing:**
* Applied Qwen chat template
* Tokenized using Qwen3-1.7B tokenizer with padding/truncation
**Training Hyperparameters:**
* Epochs: 5
* Batch size: 2
* Max length: 1024
* Learning rate: 2e-4
* LoRA config: r=8, alpha=16, dropout=0.05, target: ["q_proj", "v_proj"] (see the configuration sketch below)
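For readers who want to reproduce the adapter setup, the listed values map onto a PEFT `LoraConfig` roughly as follows; this is a sketch under the stated hyperparameters, not the original training script:
```python
# Configuration sketch implied by the hyperparameters above; not the author's code.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
lora_config = LoraConfig(
    r=8,                                  # LoRA rank, as listed
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # sanity-check the trainable fraction
```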
**Speeds, Sizes, Times:**
* Training time: ~24 minutes on an L4 GPU
* Model size: LoRA adapter, ~100 MB
---
## Evaluation
**Testing Data:**
50 reserved samples from the poetic dataset
**Factors:**
* Response fluency
* Arabic poetic structure
* Topical relevance
**Metrics:**
* Manual review (subjective)
* BLEU/Rouge not applicable
**Results:**
* 90% of generated responses respected rhyme and meter and matched the prompt topics
---
## Summary
**Model Examination [optional]:**
Output behavior is consistent with the training intent; the model performs well within its poetic use-case boundaries.
---
## Environmental Impact
**Hardware Type:** NVIDIA L4
**Hours used:** ~0.4 hours (24 minutes)
**Cloud Provider:** Google Colab
**Compute Region:** US (GCP default)
**Carbon Emitted:** Estimated ~0.2 kg CO2e
---
## Technical Specifications
**Model Architecture and Objective:** Transformer decoder (CausalLM) + LoRA injection
**Compute Infrastructure:** Google Colab
**Hardware:** NVIDIA L4 (24 mins)
**Software:**
* Transformers 4.x
* PEFT 0.15.2
* Accelerate 0.25+
---
## Citation
**BibTeX:**
```bibtex
@misc{qwenpoetry2025,
author = {Mohammed Orabi},
title = {Qwen Arabic Poetry LoRA},
year = {2025},
howpublished = {\url{https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2}}
}
```
**APA:**
Mohammed Orabi. (2025). *Qwen Arabic Poetry LoRA* [Model]. Hugging Face. [https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2](https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2)
---
## Glossary
* **LoRA**: Low-Rank Adaptation, a method for efficient model fine-tuning
* **CausalLM**: Causal Language Modeling, predicts the next token in a sequence
---
## More Information
For support or feedback, please open an issue on the Hugging Face repo or contact via Hugging Face profile.
## Model Card Authors
Mohammed Orabi
## Model Card Contact
[https://huggingface.co/mohammed-orabi2](https://huggingface.co/mohammed-orabi2)
---
## Framework versions
* Transformers: 4.x
* PEFT: 0.15.2
* Datasets: latest
* Accelerate: 0.25+
|
sergioalves/f35545f9-f1b2-443c-abf5-ff4002b3c84e | sergioalves | 2025-05-28T20:59:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-28T19:57:39Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f35545f9-f1b2-443c-abf5-ff4002b3c84e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 72943e476c035738_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/f35545f9-f1b2-443c-abf5-ff4002b3c84e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/72943e476c035738_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 37d735c4-6f83-4c91-b2bd-93cfbef10805
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 37d735c4-6f83-4c91-b2bd-93cfbef10805
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# f35545f9-f1b2-443c-abf5-ff4002b3c84e
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7375 | 0.0000 | 1 | 1.9616 |
| 1.4873 | 0.0082 | 250 | 1.8633 |
| 1.4535 | 0.0163 | 500 | 1.8212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik87/876c37d2-7d43-4e67-a6f2-b8c549bd72db | dimasik87 | 2025-05-28T20:58:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-28T19:56:06Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 876c37d2-7d43-4e67-a6f2-b8c549bd72db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 72943e476c035738_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik87/876c37d2-7d43-4e67-a6f2-b8c549bd72db
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/72943e476c035738_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 37d735c4-6f83-4c91-b2bd-93cfbef10805
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 37d735c4-6f83-4c91-b2bd-93cfbef10805
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 876c37d2-7d43-4e67-a6f2-b8c549bd72db
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7375 | 0.0000 | 1 | 1.9616 |
| 1.4887 | 0.0082 | 250 | 1.8631 |
| 1.4527 | 0.0163 | 500 | 1.8214 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
while0628/student_model_data8000_epoch8 | while0628 | 2025-05-28T20:57:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:54:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cusul/MNLP_M2_dpo_model | Cusul | 2025-05-28T20:56:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:55:38Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Cusul/MNLP_M2_dpo_model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/leo-cusumano-epfl/huggingface/runs/77x69eck)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ricostaedeli/Meta-Llama-3.1-8B-Instruct_SFT_DPO | ricostaedeli | 2025-05-28T20:52:36Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:ricostaedeli/Meta-Llama-3.1-8B-Instruct_SFT",
"base_model:finetune:ricostaedeli/Meta-Llama-3.1-8B-Instruct_SFT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-19T15:41:30Z | ---
base_model: ricostaedeli/Meta-Llama-3.1-1B-Instruct_SFT_2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ricostaedeli
- **License:** apache-2.0
- **Finetuned from model:** ricostaedeli/Meta-Llama-3.1-1B-Instruct_SFT_2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SEMUNYU/AiBioTutor | SEMUNYU | 2025-05-28T20:50:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T20:50:50Z | ---
license: apache-2.0
---
|
mharsh1903/l2 | mharsh1903 | 2025-05-28T20:49:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T20:48:09Z | ---
license: apache-2.0
---
|
jciardo/fromcolab | jciardo | 2025-05-28T20:48:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:45:28Z | ---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: Base_Dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Base_Dpo
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jciardo/fromcolab", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ricostaedeli/Meta-Llama-3.1-8B-Instruct_SFT_DPO-lora | ricostaedeli | 2025-05-28T20:48:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:ricostaedeli/Meta-Llama-3.1-8B-Instruct_SFT",
"base_model:finetune:ricostaedeli/Meta-Llama-3.1-8B-Instruct_SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-19T15:41:16Z | ---
base_model: ricostaedeli/Meta-Llama-3.1-1B-Instruct_SFT_2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ricostaedeli
- **License:** apache-2.0
- **Finetuned from model:** ricostaedeli/Meta-Llama-3.1-1B-Instruct_SFT_2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|