Dataset schema: modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-19 18:30:33) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 565 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-19 18:30:25) | card (string, 11 to 1.01M chars)
modelId: MergeBench-gemma-2-9b-it/gemma-2-9b-it_aya_2epoch | author: MergeBench-gemma-2-9b-it | last_modified: 2025-04-26T02:14:16Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, gemma2, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: text-generation | createdAt: 2025-04-26T02:08:16Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
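The card leaves this section as a placeholder. As a minimal sketch, not taken from the card itself, assuming the standard 🤗 Transformers text-generation pipeline and the repo id from this row's metadata:

```python
from transformers import pipeline

# Repo id taken from this dataset row; the usage pattern is an assumption, not from the card.
generator = pipeline(
    "text-generation",
    model="MergeBench-gemma-2-9b-it/gemma-2-9b-it_aya_2epoch",
    device_map="auto",
)

# The "conversational" tag suggests chat-style messages are accepted.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```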
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
modelId: aleegis/cc8f4887-0f3b-4fcf-a146-a276d39fa3f0 | author: aleegis | last_modified: 2025-04-26T02:05:38Z | downloads: 0 | likes: 0 | library_name: peft | tags: [peft, safetensors, gemma, axolotl, generated_from_trainer, base_model:unsloth/codegemma-2b, base_model:adapter:unsloth/codegemma-2b, license:apache-2.0, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:28:59Z
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc8f4887-0f3b-4fcf-a146-a276d39fa3f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-2b
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
  - 4e6964e807a5a11c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/4e6964e807a5a11c_train_data.json
  type:
    field_input: Company Name
    field_instruction: Long Description
    field_output: Position
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/cc8f4887-0f3b-4fcf-a146-a276d39fa3f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/4e6964e807a5a11c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: c885fea8-68b4-498c-be8c-130f1a55baf8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c885fea8-68b4-498c-be8c-130f1a55baf8
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# cc8f4887-0f3b-4fcf-a146-a276d39fa3f0
This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on the dataset described in the axolotl config above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
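Since this repo holds a LoRA adapter rather than full weights, a loading sketch may help. This is an assumption based on the `peft` library and the base model named in the config above, not code from the card, and the example prompt is invented:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/codegemma-2b"                              # base model from the axolotl config
adapter_id = "aleegis/cc8f4887-0f3b-4fcf-a146-a276d39fa3f0"   # this repo's LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The config formats prompts as '{instruction} {input}': long description, then company name.
prompt = "We build solar-powered irrigation systems for smallholder farms. SunFlow Ltd"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```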
modelId: acezxn/ACI_Cyber_Base_Deepseek_1.5B | author: acezxn | last_modified: 2025-04-26T02:04:36Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, text-generation-inference, unsloth, qwen2, trl, en, license:apache-2.0, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-04-26T02:00:39Z
---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** acezxn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-1.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
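The card ships no usage snippet; here is a minimal sketch under the assumption that the checkpoint loads as a standard causal LM through 🤗 Transformers (the example question is invented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "acezxn/ACI_Cyber_Base_Deepseek_1.5B"  # repo id from this dataset row
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a SIEM does, briefly."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```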
modelId: nomadrp/mdpo-th-v8 | author: nomadrp | last_modified: 2025-04-26T02:02:41Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, generated_from_trainer, trl, dpo, arxiv:2305.18290, base_model:meta-llama/Llama-3.1-8B-Instruct, base_model:finetune:meta-llama/Llama-3.1-8B-Instruct, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:59:16Z
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: mdpo-th-v8
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for mdpo-th-v8
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/mdpo-th-v8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
modelId: waseemrazakhan/distilbert-sentiment-classifier | author: waseemrazakhan | last_modified: 2025-04-26T01:59:05Z | downloads: 0 | likes: 0 | library_name: null | tags: [safetensors, distilbert, sentiment-analysis, text-classification, en, dataset:custom, license:mit, region:us] | pipeline_tag: text-classification | createdAt: 2025-04-26T01:58:51Z
---
language: en
license: mit
tags:
- sentiment-analysis
- text-classification
- distilbert
datasets:
- custom
metrics:
- accuracy: 0.9512
- f1: 0.9526
- precision: 0.9489
- recall: 0.9564
---
# DistilBERT Sentiment Classifier
A fine-tuned DistilBERT model for binary sentiment classification, trained on a custom dataset (with VADER features).
## Usage
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="waseemrazakhan/distilbert-sentiment-classifier")
print(classifier("I really enjoyed this movie!"))
```
## Training & Evaluation
- Base model: `distilbert-base-uncased`
- Epochs: 3
- Batch size: 8
- Max sequence length: 128
| Metric | Value |
|:---------:|:-------:|
| Accuracy | 0.9512 |
| F1 Score | 0.9526 |
| Precision | 0.9489 |
| Recall | 0.9564 |
## Limitations
- English-only
- May not generalize to very different domains
modelId: Nitral-Archive/Violet_MagCap-Rebase-12B | author: Nitral-Archive | last_modified: 2025-04-26T01:58:19Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, mistral, text-generation, mergekit, merge, conversational, arxiv:2403.19522, base_model:Nitral-AI/Captain-Eris_Violet-GRPO-v0.420, base_model:merge:Nitral-AI/Captain-Eris_Violet-GRPO-v0.420, base_model:Nitral-AI/Violet_Magcap-12B, base_model:merge:Nitral-AI/Violet_Magcap-12B, base_model:Nitral-AI/Wayfarer_Eris_Noctis-12B, base_model:merge:Nitral-AI/Wayfarer_Eris_Noctis-12B, base_model:Nitral-AI/vmc-12B-0.69420, base_model:merge:Nitral-AI/vmc-12B-0.69420, base_model:inflatebot/MN-12B-Mag-Mell-R1, base_model:merge:inflatebot/MN-12B-Mag-Mell-R1, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: text-generation | createdAt: 2025-04-26T01:52:13Z
---
base_model:
- Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- Nitral-AI/vmc-12B-0.69420
- inflatebot/MN-12B-Mag-Mell-R1
- Nitral-AI/Wayfarer_Eris_Noctis-12B
- Nitral-AI/Violet_Magcap-12B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) as a base.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Captain-Eris_Violet-GRPO-v0.420](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-GRPO-v0.420)
* [Nitral-AI/vmc-12B-0.69420](https://huggingface.co/Nitral-AI/vmc-12B-0.69420)
* [Nitral-AI/Wayfarer_Eris_Noctis-12B](https://huggingface.co/Nitral-AI/Wayfarer_Eris_Noctis-12B)
* [Nitral-AI/Violet_Magcap-12B](https://huggingface.co/Nitral-AI/Violet_Magcap-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
models:
- model: Nitral-AI/Wayfarer_Eris_Noctis-12B
- model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- model: Nitral-AI/Violet_Magcap-12B
- model: Nitral-AI/vmc-12B-0.69420
dtype: bfloat16
```
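To reproduce a merge like this, mergekit can be driven from the same YAML. Below is a sketch, assuming the Python API documented in the mergekit README; the config path and output directory are placeholders:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yaml" holds the YAML block above; "./merged" is a placeholder output path.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```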
modelId: mlfoundations-dev/c1_science_10d_4s | author: mlfoundations-dev | last_modified: 2025-04-26T01:57:14Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, qwen2, text-generation, llama-factory, full, generated_from_trainer, conversational, base_model:Qwen/Qwen2.5-7B-Instruct, base_model:finetune:Qwen/Qwen2.5-7B-Instruct, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: text-generation | createdAt: 2025-04-25T01:50:39Z
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_10d_4s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_10d_4s
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_10d_4s dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
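The card includes no usage example; here is a minimal sketch in the style of the Quick start snippets elsewhere in this dataset, assuming standard text-generation usage of a Qwen2.5 fine-tune (the example question is invented):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="mlfoundations-dev/c1_science_10d_4s", device_map="auto")
messages = [{"role": "user", "content": "Why does ice float on water?"}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```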
modelId: ooliverz/git-large-r-coco-IDB2-V2-IDB_ADv1_COCOv4 | author: ooliverz | last_modified: 2025-04-26T01:56:45Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, git, image-text-to-text, arxiv:1910.09700, endpoints_compatible, region:us] | pipeline_tag: image-text-to-text | createdAt: 2025-04-25T16:09:21Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
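The card leaves this section as a placeholder. Given the `git` architecture and `image-text-to-text` pipeline tag, a captioning sketch is a reasonable assumption (the example image URL is a stand-in, not from the card):

```python
from transformers import pipeline

# Assumed usage for a GIT-style captioning checkpoint; verify against the repo's config.
captioner = pipeline("image-to-text", model="ooliverz/git-large-r-coco-IDB2-V2-IDB_ADv1_COCOv4")
print(captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"))
```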
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
modelId: grounded-ai/phi4-mini-hallucination-judge | author: grounded-ai | last_modified: 2025-04-26T01:56:34Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, tensorboard, safetensors, generated_from_trainer, trl, sft, base_model:microsoft/Phi-4-mini-instruct, base_model:finetune:microsoft/Phi-4-mini-instruct, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:56:30Z
---
base_model: microsoft/Phi-4-mini-instruct
library_name: transformers
model_name: rouge-1-metric-wd
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for rouge-1-metric-wd
This model is a fine-tuned version of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="grounded-ai/phi4-mini-hallucination-judge", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
modelId: nomadrp/mdpo-th-v7 | author: nomadrp | last_modified: 2025-04-26T01:53:46Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, generated_from_trainer, trl, dpo, arxiv:2305.18290, base_model:meta-llama/Llama-3.1-8B-Instruct, base_model:finetune:meta-llama/Llama-3.1-8B-Instruct, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-04-24T02:36:07Z
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: mdpo-th-v7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for mdpo-th-v7
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/mdpo-th-v7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
modelId: Elnenevic2027/nicoll | author: Elnenevic2027 | last_modified: 2025-04-26T01:52:12Z | downloads: 0 | likes: 0 | library_name: null | tags: [license:other, region:us] | pipeline_tag: null | createdAt: 2025-04-26T00:55:44Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
modelId: RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf | author: RichardErkhov | last_modified: 2025-04-26T01:52:06Z | downloads: 0 | likes: 0 | library_name: null | tags: [gguf, endpoints_compatible, region:us, conversational] | pipeline_tag: null | createdAt: 2025-04-26T00:18:40Z
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-7B-Instruct_1204_new_sft - GGUF
- Model creator: https://huggingface.co/Q-PING/
- Original model: https://huggingface.co/Q-PING/Qwen2.5-7B-Instruct_1204_new_sft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q2_K.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q3_K.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q4_0.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q4_K.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q4_1.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q5_0.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q5_K.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q5_1.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q6_K.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2.5-7B-Instruct_1204_new_sft.Q8_0.gguf](https://huggingface.co/RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf/blob/main/Qwen2.5-7B-Instruct_1204_new_sft.Q8_0.gguf) | Q8_0 | 7.54GB |
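These GGUF files target llama.cpp-compatible runtimes. A sketch using llama-cpp-python's Hub integration, assuming Q4_K_M as a reasonable middle-ground quant (any file from the table above works the same way):

```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub on first use.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Q-PING_-_Qwen2.5-7B-Instruct_1204_new_sft-gguf",
    filename="Qwen2.5-7B-Instruct_1204_new_sft.Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```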
Original model description:
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Q-PING
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: Petercusin/English-news-category-classifier | author: Petercusin | last_modified: 2025-04-26T01:51:18Z | downloads: 0 | likes: 0 | library_name: null | tags: [safetensors, distilbert, code, text-classification, en, base_model:distilbert/distilbert-base-uncased, base_model:finetune:distilbert/distilbert-base-uncased, license:apache-2.0, region:us] | pipeline_tag: text-classification | createdAt: 2025-04-26T00:24:54Z
---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
tags:
- code
Eval Results: {'eval_loss': 1.6844203472137451, 'eval_accuracy': 0.5371031746031746, 'eval_f1': 0.5281888823201883, 'eval_precision': 0.5347082372987961, 'eval_recall': 0.5371031746031746, 'eval_runtime': 584.5829, 'eval_samples_per_second': 8.622, 'eval_steps_per_second': 0.539, 'epoch': 2.0}
---
## 1. Model Details
| **Attribute** | **Value** |
|-------------------------------|-----------------------------|
| Developed by | Petercusin (Guisheng Pan) |
| Model Architecture | DistilBERT |
| Activation Function | GELU |
| Dimensions | 768 |
| Size | 255M |
| Hidden Dimensions | 3072 |
| Attention Dropout | 0.1 |
| Dropout | 0.1 |
| Sequence Classification Dropout | 0.2 |
| Number of Heads | 12 |
| Number of Layers | 6 |
| Max Position Embeddings | 512 |
| Vocabulary Size | 30522 |
| Initializer Range | 0.02 |
| Tied Weights | True |
| Problem Type | Multi-Label Classification |
## 2. Model Description
This model is designed to classify English news articles into various domains or categories. It can be used for tasks such as news categorization, content organization, and topic-based filtering.
## ⚙️3. How to Get Started with the Model
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 26 08:48:07 2025
@author: Petercusin
"""
import torch
import torch.nn.functional as F
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

# Step 1: Load the trained model and tokenizer
tokenizer = DistilBertTokenizer.from_pretrained("English-news-category-classifier")
model = DistilBertForSequenceClassification.from_pretrained("English-news-category-classifier")

# Step 2: Define a function to preprocess the input text
def preprocess_text(text):
    inputs = tokenizer(text, padding='max_length', truncation=True, return_tensors='pt')
    return inputs

# Step 3: Define a function to make predictions
def predict(text):
    # Preprocess the input text
    inputs = preprocess_text(text)
    # Make predictions
    with torch.no_grad():
        outputs = model(**inputs)
    # Get the predicted class probabilities
    logits = outputs.logits
    probabilities = F.softmax(logits, dim=1).squeeze().tolist()
    predicted_class_id = torch.argmax(logits, dim=1).item()
    return predicted_class_id, probabilities

# Step 4: Load the label map from the model's configuration
label_map = model.config.id2label

# Example usage
new_titles = [
    "Stock markets reach all-time high amid economic recovery",
    "Scientists discover new species in Amazon rainforest",
    "Congress passes new bill on healthcare reforms",
    "The stairway to love: Chongqing's real-life fairy tale",
    "African delegation take in Shanghai sights on Huangpu cruise",
    "China expected to achieve higher grain output in 2025: report",
    "China continued its dominance at the 2025 World Aquatics Diving World Cup in Guadalajara, sweeping all four gold medals on the third day of competitions on Saturday, along with one silver.",
    "A 'DeepSeek moment for AI agents' as China launches Manus",
    "Developed by Monica.im, Manus achieved top scores on the GAIA (General AI Assistant) benchmark, exceeding those of OpenAI's GPT (generative pre-trained transformer) tools. GAIA is a real-world benchmark for general AI assistants.",
    "This week and without warning, a horrid video popped up on my phone. A puppy had its mouth and paws bound with tape, and was hanging in a plastic bag by the motorway. I immediately flicked past, but the image stayed with me. This was something I didn’t want to see, yet there it was at 11am on a Tuesday."
]

for title in new_titles:
    predicted_class_id, probabilities = predict(title)
    predicted_category = label_map[predicted_class_id]
    print(f"Predicted category: {predicted_category}")
    print(f"Text to classify: {title}")
    predicted_probability = probabilities[predicted_class_id]
    print(f"Probability of the predicted category: {predicted_probability:.4f}\n")
```
## Result
```text
Predicted category: BUSINESS
Text to classify: Stock markets reach all-time high amid economic recovery
Probability of the predicted category: 0.5707
Predicted category: SCIENCE
Text to classify: Scientists discover new species in Amazon rainforest
Probability of the predicted category: 0.5186
Predicted category: POLITICS
Text to classify: Congress passes new bill on healthcare reforms
Probability of the predicted category: 0.6175
Predicted category: ARTS
Text to classify: The stairway to love: Chongqing's real-life fairy tale
Probability of the predicted category: 0.2746
Predicted category: WORLDPOST
Text to classify: African delegation take in Shanghai sights on Huangpu cruise
Probability of the predicted category: 0.4686
Predicted category: GREEN
Text to classify: China expected to achieve higher grain output in 2025: report
Probability of the predicted category: 0.2889
Predicted category: SPORTS
Text to classify: China continued its dominance at the 2025 World Aquatics Diving World Cup in Guadalajara, sweeping all four gold medals on the third day of competitions on Saturday, along with one silver.
Probability of the predicted category: 0.4540
Predicted category: TECH
Text to classify: A 'DeepSeek moment for AI agents' as China launches Manus
Probability of the predicted category: 0.3297
Predicted category: TECH
Text to classify: Developed by Monica.im, Manus achieved top scores on the GAIA (General AI Assistant) benchmark, exceeding those of OpenAI's GPT (generative pre-trained transformer) tools. GAIA is a real-world benchmark for general AI assistants.
Probability of the predicted category: 0.8065
Predicted category: GOOD NEWS
Text to classify: This week and without warning, a horrid video popped up on my phone. A puppy had its mouth and paws bound with tape, and was hanging in a plastic bag by the motorway. I immediately flicked past, but the image stayed with me. This was something I didn’t want to see, yet there it was at 11am on a Tuesday.
Probability of the predicted category: 0.1350
```
## 4. Training Data
The model was trained on a dataset of news articles categorized into 42 different domains. The categories include:
| **Column 1** | **Column 2** |
|--------------|--------------|
| 0 LATINO VOICES | 21 WORLD NEWS |
| 1 ARTS | 22 QUEER VOICES |
| 2 CULTURE & ARTS | 23 PARENTING |
| 3 HOME & LIVING | 24 MONEY |
| 4 ARTS & CULTURE | 25 SPORTS |
| 5 THE WORLDPOST | 26 POLITICS |
| 6 GOOD NEWS | 27 WELLNESS |
| 7 FIFTY | 28 GREEN |
| 8 CRIME | 29 BUSINESS |
| 9 RELIGION | 30 TECH |
| 10 PARENTS | 31 ENVIRONMENT |
| 11 TASTE | 32 WOMEN |
| 12 WORLDPOST | 33 U.S. NEWS |
| 13 EDUCATION | 34 HEALTHY LIVING |
| 14 ENTERTAINMENT | 35 DIVORCE |
| 15 FOOD & DRINK | 36 MEDIA |
| 16 TRAVEL | 37 WEDDINGS |
| 17 STYLE & BEAUTY | 38 BLACK VOICES |
| 18 IMPACT | 39 STYLE |
| 19 WEIRD NEWS | 40 COMEDY |
| 20 COLLEGE | 41 SCIENCE |
## 5. Evaluation
- The model was evaluated on a test set, and the following metrics were obtained:
- Evaluation Loss: 1.6844
- Evaluation Accuracy: 0.5371
- Evaluation F1 Score: 0.5282
- Evaluation Precision: 0.5347
- Evaluation Recall: 0.5371
- Evaluation Runtime: 584.58 seconds
- Evaluation Samples per Second: 8.622
- Evaluation Steps per Second: 0.539
## 🤝 6. Model Card Contact
Author: Pan Guisheng, a PhD student at the Graduate Institute of Interpretation and Translation, Shanghai International Studies University. Email: [email protected]
modelId: Baselhany/Graduation_Project_Whisper_base_seg55 | author: Baselhany | last_modified: 2025-04-26T01:48:49Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, tensorboard, safetensors, whisper, automatic-speech-recognition, generated_from_trainer, ar, base_model:openai/whisper-base, base_model:finetune:openai/whisper-base, license:apache-2.0, endpoints_compatible, region:us] | pipeline_tag: automatic-speech-recognition | createdAt: 2025-04-25T23:12:37Z
---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0151
- Wer: 0.2689
- Cer: 0.0888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.0966 | 1.0 | 188 | 0.0158 | 0.2659 | 0.0825 |
| 0.0271 | 2.0 | 376 | 0.0137 | 0.2704 | 0.0818 |
| 0.0133 | 3.0 | 564 | 0.0259 | 0.4742 | 0.1394 |
| 0.0042 | 4.0 | 752 | 0.0288 | 0.4451 | 0.1313 |
| 0.0012 | 5.0 | 940 | 0.0299 | 0.3823 | 0.1120 |
| 0.0004 | 6.0 | 1128 | 0.0339 | 0.4029 | 0.1181 |
| 0.0002 | 7.0 | 1316 | 0.0397 | 0.4463 | 0.1310 |
| 0.0 | 8.0 | 1504 | 0.0388 | 0.4056 | 0.1193 |
| 0.0 | 9.0 | 1692 | 0.0388 | 0.3942 | 0.1167 |
| 0.0 | 9.9493 | 1870 | 0.0400 | 0.4080 | 0.1202 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
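The card gives no inference example; a minimal sketch, assuming standard Whisper usage through the ASR pipeline ("audio.wav" is a placeholder path to an Arabic recording):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Whisper_base_seg55",
)
print(asr("audio.wav")["text"])
```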
modelId: ChanMeaTheoZea143/Chan | author: ChanMeaTheoZea143 | last_modified: 2025-04-26T01:45:21Z | downloads: 0 | likes: 0 | library_name: null | tags: [license:apache-2.0, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:45:21Z
---
license: apache-2.0
---
modelId: Merthius/Andre-flux-lora | author: Merthius | last_modified: 2025-04-26T01:41:37Z | downloads: 4 | likes: 0 | library_name: diffusers | tags: [diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us] | pipeline_tag: text-to-image | createdAt: 2025-04-23T23:31:47Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ndr_klmmr
---
# Andre Flux Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ndr_klmmr` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "ndr_klmmr",
    "lora_weights": "https://huggingface.co/Merthius/Andre-flux-lora/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Merthius/Andre-flux-lora', weight_name='lora.safetensors')
image = pipeline('ndr_klmmr').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 300
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Merthius/Andre-flux-lora/discussions) to add images that show off what you’ve made with this LoRA.
modelId: Superbhaip/omega | author: Superbhaip | last_modified: 2025-04-26T01:37:45Z | downloads: 0 | likes: 0 | library_name: null | tags: [license:apache-2.0, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:37:45Z
---
license: apache-2.0
---
modelId: omarViga/tart-flux-mab | author: omarViga | last_modified: 2025-04-26T01:37:28Z | downloads: 0 | likes: 0 | library_name: diffusers | tags: [diffusers, text-to-image, lora, template:diffusion-lora, base_model:openfree/flux-chatgpt-ghibli-lora, base_model:adapter:openfree/flux-chatgpt-ghibli-lora, license:mit, region:us] | pipeline_tag: text-to-image | createdAt: 2025-04-26T01:35:46Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    mabama, Hot brunette woman in leggings and sweater, standing in city,
    detailed face, narrow waist, ass, closed lip smirk, high quality,
    ultra-realistic,
  parameters:
    negative_prompt: CyberRealistic_Negative-neg, badhandv4,
  output:
    url: images/un_osito_polar_como_mascota_de_la.jpeg
base_model: openfree/flux-chatgpt-ghibli-lora
instance_prompt: mabama
license: mit
---
# tart-flux-mab
<Gallery />
## Model description
Maba model
## Trigger words
You should use `mabama` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/omarViga/tart-flux-mab/tree/main) them in the Files & versions tab.
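No loading code is given, and the declared base model is itself a LoRA repo, so exact usage is unclear. Below is a diffusers sketch under the assumption that the adapter targets a FLUX.1-dev-compatible pipeline:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumes the LoRA applies to a FLUX.1-dev pipeline; check the weight file name
# in the Files & versions tab before relying on this.
pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("omarViga/tart-flux-mab")
image = pipe("mabama, portrait photo, high quality").images[0]
image.save("mabama.png")
```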
modelId: Docty/cloth_controlnet | author: Docty | last_modified: 2025-04-26T01:35:01Z | downloads: 7 | likes: 0 | library_name: diffusers | tags: [diffusers, tensorboard, safetensors, stable-diffusion, stable-diffusion-diffusers, text-to-image, controlnet, diffusers-training, base_model:stable-diffusion-v1-5/stable-diffusion-v1-5, base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5, license:mit, region:us] | pipeline_tag: text-to-image | createdAt: 2025-04-22T01:14:59Z
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
inference: true
license: mit
library_name: diffusers
instance_prompt: a professional studio photograph of an attractive model wearing a teal top with lace detail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ControlNet for cloth: Docty/cloth_controlnet
These are ControlNet weights for stable-diffusion-v1-5/stable-diffusion-v1-5. Example images are shown below.



```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch

base_model_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
controlnet_path = "Docty/cloth_controlnet"

controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove following line if xformers is not installed or when using Torch 2.0.
# pipe.enable_xformers_memory_efficient_attention()
# memory optimization.
pipe.enable_model_cpu_offload()

control_image = load_image("./cond1.jpg")
prompt = "a professional studio photograph of an attractive model wearing a teal top with lace detail"

# generate image
# generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, image=control_image
).images[0]
image
```
modelId: dgambettaphd/M_llm3_gen10_run0_X_doc1000_synt64_tot128_MPP | author: dgambettaphd | last_modified: 2025-04-26T01:34:24Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, unsloth, arxiv:1910.09700, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:34:09Z
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
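The card leaves this section as a placeholder and sets no pipeline tag. A sketch that assumes a causal LM, which is consistent with the `unsloth` tag and the repo name but unverified:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "dgambettaphd/M_llm3_gen10_run0_X_doc1000_synt64_tot128_MPP"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # check config.json for the real architecture

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```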
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
modelId: jdchang/full-dataset-bs-1024-lr-3e-4-sg-2-step-4860 | author: jdchang | last_modified: 2025-04-26T01:33:05Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, qwen2, arxiv:1910.09700, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-04-26T01:32:54Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
autoprogrammer/qwen-0.5b-sven-lora
|
autoprogrammer
| 2025-04-26T01:28:47Z | 0 | 0 |
peft
|
[
"peft",
"lora",
"safecoder",
"code-generation",
"code-security",
"code",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T01:28:45Z |
---
language: code
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
tags:
- lora
- peft
- safecoder
- code-generation
- code-security
---
# SafeCoder LoRA Adapter
This is a code-security model adapter fine-tuned with the LoRA method.
## Model description
This adapter was fine-tuned from Qwen2.5-Coder-0.5B-Instruct using the LoRA method for secure code generation, i.e., producing safer code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the tokenizer (replace <model-repo-path> with this repository's id)
tokenizer = AutoTokenizer.from_pretrained("<model-repo-path>")

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct", trust_remote_code=True)

# Resize the base model's token embeddings to match the tokenizer vocabulary
base_model.resize_token_embeddings(len(tokenizer))

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "<model-repo-path>")

# Run inference
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Kayabuki4/29_1
|
Kayabuki4
| 2025-04-26T01:27:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T01:27:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jonghyunho/gemma-3-reason-lora
|
jonghyunho
| 2025-04-26T01:26:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T01:26:39Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jonghyunho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aicinema69/audio-emotion-detector-v1.0
|
aicinema69
| 2025-04-26T01:26:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"emotion-detection",
"audio-emotion-detection",
"base_model:MIT/ast-finetuned-audioset-14-14-0.443",
"base_model:finetune:MIT/ast-finetuned-audioset-14-14-0.443",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-04-25T14:29:25Z |
---
library_name: transformers
tags:
- emotion-detection
- audio-emotion-detection
base_model:
- MIT/ast-finetuned-audioset-14-14-0.443
pipeline_tag: audio-classification
---
## 📊 Model Performance
| Metric | Score |
|------------|-----------|
| Accuracy | 0.38 |
| Precision | 0.3075 |
| Recall | 0.38 |
| F1-Score | 0.2871 |
> **Note**: This model is a first iteration and may benefit from further fine-tuning and data augmentation.
---
## 🗂️ Emotion Classes
The model classifies audio samples into the following 9 emotions:
```
0 - angry
1 - apologetic
2 - base
3 - calm
4 - excited
5 - fear
6 - happy
7 - sad
8 - surprise
```
---
## 🏋️♂️ Training Details
- **Dataset**: Custom dataset with `audio_path` and `emotion` columns.
- **Sampling Rate**: Resampled to 16kHz
- **Max Audio Length**: 10 seconds
- **Training Epochs**: 2
- **Training/Validation Split**: 80/20
- **Optimization**: AdamW
- **Precision**: Full (fp32)
---
## 🧪 Training Logs (Loss)
| Step | Training Loss | Validation Loss |
|------|----------------|------------------|
| 0 | 2.23 | 1.91 |
| 350 | 0.88 | 0.86 |
| 525 | 0.98 | 0.53 |
| 750 | 0.22 | 0.35 |
| 1125 | 0.25 | 0.30 |
> Full logs available in training script output.
---
## 🚀 Usage
```python
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification
import torch
import torchaudio
model = AutoModelForAudioClassification.from_pretrained("aicinema69/audio-emotion-detector-v1.0")
feature_extractor = AutoFeatureExtractor.from_pretrained("aicinema69/audio-emotion-detector-v1.0")
# Load your audio (16kHz recommended)
waveform, sample_rate = torchaudio.load("your_audio.wav")
# Preprocess
inputs = feature_extractor(waveform.squeeze().numpy(), sampling_rate=sample_rate, return_tensors="pt", padding=True)
# Predict
with torch.no_grad():
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print("Predicted emotion:", model.config.id2label[predicted_class])
```
---
## 🛠️ Model Card Notes
- You **should fine-tune this model on your downstream task** for better performance.
- The feature extractor is stored in `trainer.tokenizer` (note: the `tokenizer` attribute is deprecated in newer 🤗 releases; use `processing_class` instead).
- Model and extractor pushed to Hub using `push_to_hub`.
---
## 📦 Deployment
After training:
```python
model.push_to_hub("aicinema69/audio-emotion-detector-v1.0")
feature_extractor.push_to_hub("aicinema69/audio-emotion-detector-v1.0")
```
---
## ✍️ Author
Satyam Singh
GitHub: [SatyamSingh8306](https://github.com/SatyamSingh8306)
Hugging Face: [aicinema69](https://huggingface.co/aicinema69)
---
|
mergekit-community/mergekit-slerp-birdtke
|
mergekit-community
| 2025-04-26T01:23:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mergekit-community/Omega-Darker_Slush-12B",
"base_model:merge:mergekit-community/Omega-Darker_Slush-12B",
"base_model:mergekit-community/mergekit-passthrough-hdctkvu",
"base_model:merge:mergekit-community/mergekit-passthrough-hdctkvu",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T01:16:02Z |
---
base_model:
- mergekit-community/Omega-Darker_Slush-12B
- mergekit-community/mergekit-passthrough-hdctkvu
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
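As a quick reference, SLERP interpolates each pair of weight tensors along the arc between them rather than linearly; with $t$ the interpolation factor (mergekit interpolates the `t` list in the configuration below across layer depth):

$$
\mathrm{slerp}(a, b; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, a + \frac{\sin(t\,\theta)}{\sin\theta}\, b,
\qquad
\theta = \arccos\!\left(\frac{\langle a, b \rangle}{\lVert a \rVert \, \lVert b \rVert}\right)
$$

At $t = 0$ this returns the base model's weights unchanged; at $t = 1$ it returns the other model's.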
### Models Merged
The following models were included in the merge:
* [mergekit-community/Omega-Darker_Slush-12B](https://huggingface.co/mergekit-community/Omega-Darker_Slush-12B)
* [mergekit-community/mergekit-passthrough-hdctkvu](https://huggingface.co/mergekit-community/mergekit-passthrough-hdctkvu)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/Omega-Darker_Slush-12B
- model: mergekit-community/mergekit-passthrough-hdctkvu
merge_method: slerp
base_model: mergekit-community/Omega-Darker_Slush-12B
dtype: bfloat16
parameters:
t: [0, 0.5, 0.75, 0.5, 0]
```
|
alex-1984/BiomedVLP-CXR-BERT-general-ONNX
|
alex-1984
| 2025-04-26T01:22:12Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"fill-mask",
"base_model:microsoft/BiomedVLP-CXR-BERT-general",
"base_model:quantized:microsoft/BiomedVLP-CXR-BERT-general",
"region:us"
] |
fill-mask
| 2025-04-26T01:21:54Z |
---
library_name: transformers.js
base_model:
- microsoft/BiomedVLP-CXR-BERT-general
---
# BiomedVLP-CXR-BERT-general (ONNX)
This is an ONNX version of [microsoft/BiomedVLP-CXR-BERT-general](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
delist/5090
|
delist
| 2025-04-26T01:19:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-22T02:01:41Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: '5090'
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 5090
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="delist/5090", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.8.0.dev20250424+cu128
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
JuMaxBenz/falkon
|
JuMaxBenz
| 2025-04-26T01:12:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"instruction-following",
"lora",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T20:11:42Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
- instruction-following
- lora
model-index:
- name: falkon
results: []
---
# falkon
**falkon** is a LoRA adapter fine-tuned on top of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) to improve instruction-following performance on general question–answer tasks.
## Model description
This adapter was trained to align the base model more closely with conversational, Q&A-style prompts. It encapsulates the learned weights in a LoRA (Low-Rank Adaptation) format and is intended to be loaded alongside the base Mistral 7B Instruct model.
## Intended uses & limitations
**Intended uses**
- Enhancing Mistral-7B’s ability to follow user instructions in a conversational or Q&A setting.
- Rapidly adapting to new domains by swapping in the LoRA adapter.
**Limitations & warnings**
- The adapter has seen only a small sample (≈2,000 examples) from each dataset; for specialized domains or languages, further fine-tuning may be needed.
- May reproduce biases present in the training data (OpenOrca & Dolly-15k).
- Not designed for safety-critical applications without additional evaluation.
## Training and evaluation data
**Training data**
- **OpenOrca/OpenOrca**: 2,000 examples sampled from a large pool of model-generated instruction–response pairs (GPT-3.5/GPT-4).
- **databricks/databricks-dolly-15k**: 2,000 human-crafted instruction–response examples across diverse categories.
**Evaluation data**
- No automatic evaluation was run; please evaluate on your own benchmarks before deployment.
## Training procedure
### Data preprocessing
- Each example is formatted as `<|user|> {instruction} <|assistant|> {response}`
- Tokenized with `max_length = 512`, padded/truncated to a fixed size.
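A minimal sketch of this preprocessing step (the column names are assumptions and may differ from the actual training script):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

def preprocess(example):
    # Wrap each instruction-response pair in the chat markers used during training
    text = f"<|user|> {example['instruction']} <|assistant|> {example['response']}"
    # Pad/truncate to the fixed 512-token length
    return tokenizer(text, max_length=512, truncation=True, padding="max_length")
```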
### Training setup
- **Adapter type**: LoRA
- **Quantization**: 4-bit (bnb NF4) with CPU/GPU offload
- **Batch size**: 1 (with 8-step gradient accumulation → effective batch = 8)
- **Epochs**: 3
- **Learning rate**: 1 × 10⁻⁴
- **Optimizer**: AdamW (β = 0.9, 0.999; ε = 1 × 10⁻⁸)
- **Scheduler**: linear decay
- **Mixed precision**: native AMP (fp16)
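To run the adapter, load the base model (optionally in 4-bit NF4, mirroring the training setup) and attach the LoRA weights. A minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, "JuMaxBenz/falkon")  # attach the LoRA adapter

prompt = "<|user|> Explain what a LoRA adapter is. <|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```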
## Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
miliki01/milik
|
miliki01
| 2025-04-26T01:09:50Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-26T01:09:44Z |
---
license: bigcode-openrail-m
---
|
mihir-s/medquad_classify
|
mihir-s
| 2025-04-26T01:09:42Z | 0 | 0 | null |
[
"pytorch",
"bert",
"license:mit",
"region:us"
] | null | 2025-04-25T20:53:57Z |
---
title: Medquad Classify
emoji: 🏆
colorFrom: gray
colorTo: green
sdk: gradio
sdk_version: 5.27.0
app_file: app.py
pinned: false
license: mit
short_description: Classifying medical questions into 2000+ focus areas
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
weifar/gemma_2
|
weifar
| 2025-04-26T01:08:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-04-26T01:06:52Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jonathanyin/qwen2.5-32b_deepseek-r1_traces_32768
|
jonathanyin
| 2025-04-26T01:08:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T21:12:36Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: qwen2.5-32b_deepseek-r1_traces_32768
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-32b_deepseek-r1_traces_32768
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jonathanyin/qwen2.5-32b_deepseek-r1_traces_32768", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jonathanyin-yale/LLM%20Reasoning/runs/lva1v2l8)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.4.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Tharyck/audios-multispeaker-refine
|
Tharyck
| 2025-04-26T01:08:00Z | 0 | 0 | null |
[
"text-to-speech",
"pt",
"dataset:Tharyck/multispeaker-tts-ptbr",
"base_model:SWivid/F5-TTS",
"base_model:finetune:SWivid/F5-TTS",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2025-04-12T01:19:11Z |
---
license: apache-2.0
datasets:
- Tharyck/multispeaker-tts-ptbr
language:
- pt
base_model:
- SWivid/F5-TTS
pipeline_tag: text-to-speech
---
This repository contains a TTS (text-to-speech) model fine-tuned from F5TTS, focused on multi-speaker Brazilian Portuguese voices.
📦 Data used
Training combined public and private datasets, totaling:
⏱️ Total duration: 390.78 h
📄 Total records: 159,348 samples
📂 Public dataset: [multispeaker-tts-ptbr](https://huggingface.co/datasets/Tharyck/multispeaker-tts-ptbr)
🚀 Training
☁️ Cloud: Runpod
🛠️ Training phases:
~30h: segmentation and transition
~24h on an A40 GPU
~30h on an A4000 GPU
💸 Estimated cost: $50 USD
🔊 Audio samples
🎙️ Single voice (single speaker): [single](https://voca.ro/18NRon2EX7XW)
👥 Multiple voices (multi-speaker): [multi](https://voca.ro/133EpebV0D6u)
⚠️ Disclaimer
This project was developed for educational and research purposes.
I am not responsible for misuse or for commercial applications without proper licensing.
|
YoussefAshmawy/Graduation_Project_Whisper_base_backup
|
YoussefAshmawy
| 2025-04-26T01:07:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-04-24T14:58:39Z |
---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - YA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - YA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0033
- Wer: 0.0497
- Cer: 0.0200
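For quick testing, the model can be used with the 🤗 `pipeline` API (a minimal sketch; the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="YoussefAshmawy/Graduation_Project_Whisper_base_backup",
)
# The pipeline decodes the file and resamples it to the 16 kHz rate Whisper expects.
print(asr("recitation.wav")["text"])
```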
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (`adamw_torch`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.0024 | 1.0 | 320 | 0.0034 | 0.0440 | 0.0180 |
| 0.0014 | 2.0 | 640 | 0.0049 | 0.0653 | 0.0257 |
| 0.0013 | 3.0 | 960 | 0.0057 | 0.0766 | 0.0283 |
| 0.0007 | 4.0 | 1280 | 0.0057 | 0.0681 | 0.0290 |
| 0.0004 | 5.0 | 1600 | 0.0057 | 0.0617 | 0.0253 |
| 0.0002 | 6.0 | 1920 | 0.0060 | 0.0662 | 0.0244 |
| 0.0002 | 7.0 | 2240 | 0.0068 | 0.0624 | 0.0237 |
| 0.0003 | 8.0 | 2560 | 0.0061 | 0.0652 | 0.0259 |
| 0.0003 | 9.0 | 2880 | 0.0067 | 0.0648 | 0.0252 |
| 0.0004 | 10.0 | 3200 | 0.0062 | 0.0670 | 0.0259 |
| 0.0002 | 11.0 | 3520 | 0.0061 | 0.0610 | 0.0230 |
| 0.0001 | 12.0 | 3840 | 0.0064 | 0.0581 | 0.0217 |
| 0.0001 | 13.0 | 4160 | 0.0061 | 0.0576 | 0.0217 |
| 0.0 | 14.0 | 4480 | 0.0062 | 0.0594 | 0.0235 |
| 0.0 | 15.0 | 4800 | 0.0066 | 0.0630 | 0.0251 |
| 0.0 | 16.0 | 5120 | 0.0069 | 0.0581 | 0.0240 |
| 0.0 | 17.0 | 5440 | 0.0070 | 0.0579 | 0.0228 |
| 0.0 | 18.0 | 5760 | 0.0071 | 0.0586 | 0.0232 |
| 0.0 | 19.0 | 6080 | 0.0072 | 0.0590 | 0.0239 |
| 0.0 | 20.0 | 6400 | 0.0072 | 0.0576 | 0.0234 |
| 0.0 | 21.0 | 6720 | 0.0073 | 0.0574 | 0.0239 |
| 0.0 | 22.0 | 7040 | 0.0073 | 0.0577 | 0.0240 |
| 0.0 | 23.0 | 7360 | 0.0074 | 0.0577 | 0.0240 |
| 0.0 | 24.0 | 7680 | 0.0076 | 0.0613 | 0.0246 |
| 0.0 | 25.0 | 8000 | 0.0074 | 0.0581 | 0.0244 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
shubhamprshr/Qwen2.5-3B-Instruct_blocksworld1246_sgrpo_gaussian_0.25_0.75_True_300
|
shubhamprshr
| 2025-04-26T01:06:48Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-22T13:58:43Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Qwen2.5-3B-Instruct_blocksworld1246_sgrpo_gaussian_0.25_0.75_True_300
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_blocksworld1246_sgrpo_gaussian_0.25_0.75_True_300
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-3B-Instruct_blocksworld1246_sgrpo_gaussian_0.25_0.75_True_300", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/786i94i5)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vermoney/5b8250ed-7de0-49e5-b583-2a832f211337
|
vermoney
| 2025-04-26T01:06:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T01:00:27Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5b8250ed-7de0-49e5-b583-2a832f211337
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a7835b59cc5cb7bc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7835b59cc5cb7bc_train_data.json
type:
field_input: subset
field_instruction: prompt
field_output: response_1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/5b8250ed-7de0-49e5-b583-2a832f211337
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a7835b59cc5cb7bc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5b8250ed-7de0-49e5-b583-2a832f211337
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8689 | 0.1791 | 200 | 0.8941 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Muttazabhai3232/Mrtazabhai
|
Muttazabhai3232
| 2025-04-26T01:05:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T01:05:35Z |
---
license: apache-2.0
---
|
hasdal/5b1d0968-0c79-40be-831c-c9fe8c78ed6f
|
hasdal
| 2025-04-26T01:03:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T00:58:10Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5b1d0968-0c79-40be-831c-c9fe8c78ed6f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a7835b59cc5cb7bc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7835b59cc5cb7bc_train_data.json
type:
field_input: subset
field_instruction: prompt
field_output: response_1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: hasdal/5b1d0968-0c79-40be-831c-c9fe8c78ed6f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000208
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_bias: none
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a7835b59cc5cb7bc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: false
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: false
```
</details><br>
# 5b1d0968-0c79-40be-831c-c9fe8c78ed6f
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0027 | 3 | nan |
| 0.0 | 0.0054 | 6 | nan |
| 0.0 | 0.0081 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kokovova/f4e69aee-68b0-4c26-b804-c0a9b2e7a497
|
kokovova
| 2025-04-26T01:03:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T00:57:41Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4e69aee-68b0-4c26-b804-c0a9b2e7a497
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- a7835b59cc5cb7bc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7835b59cc5cb7bc_train_data.json
type:
field_input: subset
field_instruction: prompt
field_output: response_1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/f4e69aee-68b0-4c26-b804-c0a9b2e7a497
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a7835b59cc5cb7bc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f4e69aee-68b0-4c26-b804-c0a9b2e7a497
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8699 | 0.1791 | 200 | 0.8942 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MrRobotoAI/D8
|
MrRobotoAI
| 2025-04-26T01:03:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MrRobotoAI/B2",
"base_model:merge:MrRobotoAI/B2",
"base_model:MrRobotoAI/D7",
"base_model:merge:MrRobotoAI/D7",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T00:58:10Z |
---
base_model:
- MrRobotoAI/D7
- MrRobotoAI/B2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/D7](https://huggingface.co/MrRobotoAI/D7)
* [MrRobotoAI/B2](https://huggingface.co/MrRobotoAI/B2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/B2
- model: MrRobotoAI/D7 # C7 replace 122
merge_method: slerp
base_model: MrRobotoAI/B2
dtype: bfloat16
parameters:
t: [0, 0.25, 0.5, 0.25, 0]
```
|
MinaMila/phi3_LoRa_ACSEmployment_cfda_ep1_22
|
MinaMila
| 2025-04-26T01:02:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T01:02:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrRobotoAI/D7
|
MrRobotoAI
| 2025-04-26T00:58:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MrRobotoAI/B1",
"base_model:merge:MrRobotoAI/B1",
"base_model:MrRobotoAI/B2",
"base_model:merge:MrRobotoAI/B2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T00:55:02Z |
---
base_model:
- MrRobotoAI/B1
- MrRobotoAI/B2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/B1](https://huggingface.co/MrRobotoAI/B1)
* [MrRobotoAI/B2](https://huggingface.co/MrRobotoAI/B2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Replaces MrRobotoAI/B5
slices:
- sources:
- model: MrRobotoAI/B2
layer_range: [0, 3]
- sources:
- model: MrRobotoAI/B1
layer_range: [3, 29]
- sources:
- model: MrRobotoAI/B2
layer_range: [29, 32]
merge_method: passthrough
dtype: float16
```
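For reference, the same merge can also be run programmatically. The sketch below assumes mergekit's documented Python API (`MergeConfiguration`, `run_merge`) and that the YAML above has been saved as `config.yaml`; the output path is illustrative:
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML shown above and run the passthrough merge.
with open("config.yaml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",                  # illustrative output directory
    options=MergeOptions(copy_tokenizer=True),
)
```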
|
jdchang/full-dataset-bs-1024-lr-1e-4-sg-2-step-4868
|
jdchang
| 2025-04-26T00:57:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T00:57:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nolalaverna/nolalaverna
|
nolalaverna
| 2025-04-26T00:57:53Z | 0 | 0 | null |
[
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-26T00:57:53Z |
---
license: bsd-2-clause
---
|
mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF
|
mradermacher
| 2025-04-26T00:57:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:allenai/dolma",
"dataset:allenai/tulu-v2-sft-mixture-olmo-4096",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:allenai/OLMo-7B-0724-Instruct-hf",
"base_model:quantized:allenai/OLMo-7B-0724-Instruct-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-25T21:26:59Z |
---
base_model: allenai/OLMo-7B-0724-Instruct-hf
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture-olmo-4096
- allenai/ultrafeedback_binarized_cleaned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allenai/OLMo-7B-0724-Instruct-hf
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
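If you prefer a programmatic route, llama-cpp-python can fetch a quant straight from this repo. A minimal sketch assuming its `Llama.from_pretrained` helper; the file picked below is just one of the quants listed:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads the chosen quant from this repo (via huggingface_hub) and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF",
    filename="OLMo-7B-0724-Instruct-hf.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
    n_ctx=2048,
)
print(llm("Q: What is OLMo?\nA:", max_tokens=64)["choices"][0]["text"])
```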
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q4_0.gguf) | i1-Q4_0 | 4.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q4_1.gguf) | i1-Q4_1 | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.i1-Q6_K.gguf) | i1-Q6_K | 5.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/OLMo-7B-0724-Instruct-hf-GGUF
|
mradermacher
| 2025-04-26T00:57:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:allenai/dolma",
"dataset:allenai/tulu-v2-sft-mixture-olmo-4096",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:allenai/OLMo-7B-0724-Instruct-hf",
"base_model:quantized:allenai/OLMo-7B-0724-Instruct-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-25T10:55:36Z |
---
base_model: allenai/OLMo-7B-0724-Instruct-hf
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture-olmo-4096
- allenai/ultrafeedback_binarized_cleaned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allenai/OLMo-7B-0724-Instruct-hf
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-7B-0724-Instruct-hf-GGUF/resolve/main/OLMo-7B-0724-Instruct-hf.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KilicMehmet/saglik
|
KilicMehmet
| 2025-04-26T00:56:54Z | 28 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-12T16:00:46Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: saglik
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# saglik
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
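Decoded, the serialized config above is AdamW with weight decay 0.01 and a 1000-step linear warmup to a peak learning rate of 5e-05. A minimal sketch of rebuilding an equivalent TF optimizer with transformers' `create_optimizer` helper; the total step count is an assumption, since the card does not record it:
```python
from transformers import create_optimizer

# Roughly mirrors the serialized AdamWeightDecay config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,            # peak learning rate after warmup
    num_warmup_steps=1000,
    num_train_steps=10_000,  # hypothetical total; not recorded in the card
    weight_decay_rate=0.01,
)
```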
### Training results
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.5.0
- Tokenizers 0.21.1
|
jdchang/full-dataset-bs-1024-lr-1e-4-sg-2-step-4860
|
jdchang
| 2025-04-26T00:56:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T00:56:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
philipfourie/bi-morse-code-Q4_K_M-GGUF
|
philipfourie
| 2025-04-26T00:53:46Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:philipfourie/bi-morse-code",
"base_model:quantized:philipfourie/bi-morse-code",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T00:53:40Z |
---
base_model: philipfourie/bi-morse-code
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- llama-cpp
- gguf-my-repo
---
# philipfourie/bi-morse-code-Q4_K_M-GGUF
This model was converted to GGUF format from [`philipfourie/bi-morse-code`](https://huggingface.co/philipfourie/bi-morse-code) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/philipfourie/bi-morse-code) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo philipfourie/bi-morse-code-Q4_K_M-GGUF --hf-file bi-morse-code-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo philipfourie/bi-morse-code-Q4_K_M-GGUF --hf-file bi-morse-code-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo philipfourie/bi-morse-code-Q4_K_M-GGUF --hf-file bi-morse-code-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo philipfourie/bi-morse-code-Q4_K_M-GGUF --hf-file bi-morse-code-q4_k_m.gguf -c 2048
```
|
jdchang/full-dataset-bs-1024-lr-7e-5-sg-2-step-4868
|
jdchang
| 2025-04-26T00:49:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T00:49:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
smith-nathanh/fincolqwen2.5-v0.2
|
smith-nathanh
| 2025-04-26T00:48:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"colqwen",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T00:48:05Z |
---
library_name: transformers
tags:
- colqwen
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AdversarialRLHF/pythia410m-rm-tldr6.9b_logprobcondprefix
|
AdversarialRLHF
| 2025-04-26T00:46:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:mnoukhov/pythia410m-sft-tldr",
"base_model:finetune:mnoukhov/pythia410m-sft-tldr",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-25T22:02:37Z |
---
base_model: mnoukhov/pythia410m-sft-tldr
library_name: transformers
model_name: pythia410m-rm-tldr6.9b_logprobcondprefix
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for pythia410m-rm-tldr6.9b_logprobcondprefix
This model is a fine-tuned version of [mnoukhov/pythia410m-sft-tldr](https://huggingface.co/mnoukhov/pythia410m-sft-tldr).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This is a reward model (a sequence classifier), so it is used to score a
# candidate summary rather than to generate text; the example input is illustrative.
reward_model = pipeline("text-classification", model="AdversarialRLHF/pythia410m-rm-tldr6.9b_logprobcondprefix", device="cuda")
score = reward_model("POST: I found a one-use time machine in my attic.\nTL;DR: One-way time travel: past or future?")[0]
print(score)
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muqeeth/adversarial_goodhart_rlhf/runs/g2l70cn5)
This model was trained with TRL's `RewardTrainer`.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
philipfourie/bi-morse-code
|
philipfourie
| 2025-04-26T00:44:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-pt",
"base_model:finetune:unsloth/gemma-3-1b-pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T09:17:53Z |
---
base_model: unsloth/gemma-3-1b-pt
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** philipfourie
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-pt
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MrGS37/llama-3.2-3B-stunting-instruct-v1
|
MrGS37
| 2025-04-26T00:34:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T00:31:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mohamed2210/whisper-base-ar-upd
|
Mohamed2210
| 2025-04-26T00:32:40Z | 70 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"dataset:private",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-12T06:31:48Z |
---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- private
metrics:
- wer
- cer
model-index:
- name: Whisper base ar - Mohamed Ahmed-Mahmoud Nasser
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: private
type: private
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 18.308400460299197
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base ar - Mohamed Ahmed-Mahmoud Nasser
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the private dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1244
- Wer: 18.3084
- Cer: 8.3096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
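These settings map directly onto transformers' `Seq2SeqTrainingArguments`. A minimal sketch under that assumption (the output path is illustrative):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar",  # illustrative path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    max_steps=6000,
    fp16=True,                       # "Native AMP" mixed precision
)
```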
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|
| 0.107 | 1.0638 | 1000 | 0.1412 | 26.0759 | 10.2741 |
| 0.0927 | 2.1277 | 2000 | 0.1159 | 21.8412 | 9.1956 |
| 0.0601 | 3.1915 | 3000 | 0.1155 | 22.0368 | 9.2820 |
| 0.042 | 4.2553 | 4000 | 0.1135 | 18.7112 | 8.3240 |
| 0.018 | 5.3191 | 5000 | 0.1226 | 17.9517 | 8.1499 |
| 0.0068 | 6.3830 | 6000 | 0.1244 | 18.3084 | 8.3096 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Erenosxx/whisper-small_All_datasets_finetune
|
Erenosxx
| 2025-04-26T00:30:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T14:20:19Z |
---
library_name: peft
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-small_All_datasets_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small_All_datasets_finetune
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3740
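Because this repository holds a PEFT adapter rather than full model weights, inference requires loading the base model first. A minimal sketch assuming the standard peft API:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then attach this repo's adapter on top.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "Erenosxx/whisper-small_All_datasets_finetune")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
```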
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.1238 | 0.2923 | 1500 | 0.4397 |
| 0.1144 | 0.5846 | 3000 | 0.4728 |
| 0.0957 | 0.8769 | 4500 | 0.4778 |
| 0.0627 | 1.1691 | 6000 | 0.4498 |
| 0.0652 | 1.4614 | 7500 | 0.4460 |
| 0.0549 | 1.7537 | 9000 | 0.4164 |
| 0.0279 | 2.0460 | 10500 | 0.4069 |
| 0.0257 | 2.3383 | 12000 | 0.3998 |
| 0.0241 | 2.6306 | 13500 | 0.3836 |
| 0.0202 | 2.9228 | 15000 | 0.3740 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.5.1
- Datasets 3.0.0
- Tokenizers 0.21.1
|
mdhanif1/hanif
|
mdhanif1
| 2025-04-26T00:29:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T00:29:05Z |
---
license: apache-2.0
---
|
ClinicianFOCUS/gemma3-4b-soap-note-generator-v4
|
ClinicianFOCUS
| 2025-04-26T00:23:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T00:17:10Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ClinicianFOCUS
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_llm3_gen9_run0_X_doc1000_synt64_tot128_MPP
|
dgambettaphd
| 2025-04-26T00:10:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T00:10:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tinycompany/BiBo-MoE-Tiny
|
tinycompany
| 2025-04-26T00:10:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T23:51:17Z |
---
license: apache-2.0
---
- Layers 0 and -1 (the first and last decoder layers) are dense.
- A shared causal convolutional expert runs alongside the routed experts.
- Routing uses noisy top-k gating with load rebalancing (DeepSeek-style); a sketch of the routing step follows the architecture printout below.
```python
BiBoForCausalLM(
(model): BiBoModel(
(embed_tokens): Embedding(128000, 1024)
(layers): ModuleList(
(0): BiBoDecoderLayer(
(self_attn): BiBoAttention(
(q_proj): Linear(in_features=1024, out_features=1020, bias=True)
(k_proj): Linear(in_features=1024, out_features=170, bias=True)
(v_proj): Linear(in_features=1024, out_features=170, bias=True)
(o_proj): Linear(in_features=1020, out_features=1024, bias=False)
)
(input_layernorm): BiBoRMSNorm((1024,), eps=1e-06)
(post_attention_layernorm): BiBoRMSNorm((1024,), eps=1e-06)
(mlp): BiBoMLP(
(gate_proj): Linear(in_features=1024, out_features=49600, bias=False)
(up_proj): Linear(in_features=1024, out_features=49600, bias=False)
(down_proj): Linear(in_features=49600, out_features=1024, bias=False)
(act_fn): SiLU()
)
)
(1-10): 10 x BiBoDecoderLayer(
(self_attn): BiBoAttention(
(q_proj): Linear(in_features=1024, out_features=1020, bias=True)
(k_proj): Linear(in_features=1024, out_features=170, bias=True)
(v_proj): Linear(in_features=1024, out_features=170, bias=True)
(o_proj): Linear(in_features=1020, out_features=1024, bias=False)
)
(input_layernorm): BiBoRMSNorm((1024,), eps=1e-06)
(post_attention_layernorm): BiBoRMSNorm((1024,), eps=1e-06)
(mlp): BiBoMoELayer(
(routed_experts): ModuleList(
(0-8): 9 x MLPExpert(
(gate_proj): Linear(in_features=1024, out_features=512, bias=False)
(up_proj): Linear(in_features=1024, out_features=512, bias=False)
(down_proj): Linear(in_features=512, out_features=1024, bias=False)
(act_fn): SiLU()
)
(9): IdentityExpert()
)
(shared_experts_list): ModuleList(
(0): ModifiedConvolutionalExpert(
(gate_conv): Conv1d(1024, 512, kernel_size=(3,), stride=(1,), bias=False)
(up_proj): Linear(in_features=1024, out_features=512, bias=False)
(down_proj): Linear(in_features=512, out_features=1024, bias=False)
(act_fn): SiLU()
)
)
(gate): BiBoMoERouter(
(gate_proj): Linear(in_features=1024, out_features=10, bias=False)
)
)
)
(11): BiBoDecoderLayer(
(self_attn): BiBoAttention(
(q_proj): Linear(in_features=1024, out_features=1020, bias=True)
(k_proj): Linear(in_features=1024, out_features=170, bias=True)
(v_proj): Linear(in_features=1024, out_features=170, bias=True)
(o_proj): Linear(in_features=1020, out_features=1024, bias=False)
)
(input_layernorm): BiBoRMSNorm((1024,), eps=1e-06)
(post_attention_layernorm): BiBoRMSNorm((1024,), eps=1e-06)
(mlp): BiBoMLP(
(gate_proj): Linear(in_features=1024, out_features=49600, bias=False)
(up_proj): Linear(in_features=1024, out_features=49600, bias=False)
(down_proj): Linear(in_features=49600, out_features=1024, bias=False)
(act_fn): SiLU()
)
)
)
(norm): BiBoRMSNorm((1024,), eps=1e-06)
(rotary_emb): BiBoRotaryEmbedding()
)
(lm_head): Linear(in_features=1024, out_features=128000, bias=False)
)
```
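For intuition, noisy top-k routing with a load-balancing auxiliary loss might look like the sketch below. This is a minimal illustration with assumed names (`noisy_topk_route`, `router_weight`) and a Switch/DeepSeek-style balancing term, not this repository's actual code:
```python
import torch
import torch.nn.functional as F

def noisy_topk_route(hidden, router_weight, k=2, noise_std=1.0, training=True):
    # hidden: (tokens, d_model); router_weight: (d_model, n_experts).
    logits = hidden @ router_weight
    if training:
        logits = logits + noise_std * torch.randn_like(logits)  # noisy gating
    probs = F.softmax(logits, dim=-1)
    topk_probs, topk_idx = probs.topk(k, dim=-1)
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)  # renormalize

    # Switch/DeepSeek-style load-balancing auxiliary loss:
    # f_i = fraction of routing assignments sent to expert i,
    # p_i = mean router probability for expert i.
    n_experts = probs.size(-1)
    assignments = F.one_hot(topk_idx, n_experts).float().sum(dim=1)  # (tokens, E)
    f = assignments.mean(dim=0) / k       # sums to 1 over experts
    p = probs.mean(dim=0)                 # sums to 1 over experts
    aux_loss = n_experts * (f * p).sum()  # scaled by a coefficient in the total loss

    return topk_idx, topk_probs, aux_loss
```
The auxiliary term pushes the average routing fraction per expert toward uniform, which counteracts expert collapse during training.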
|
Kimory-X/zephyr-7b-mypo3-full-beta0.01-lr5e-6
|
Kimory-X
| 2025-04-26T00:09:16Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"trl",
"dpo",
"generated_from_trainer",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T12:13:30Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-mypo3-full-beta0.01-lr5e-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-mypo3-full-beta0.01-lr5e-6
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3456
- Rewards/chosen: -0.0126
- Rewards/rejected: -0.3787
- Rewards/accuracies: 0.7292
- Rewards/margins: 0.3661
- Logps/rejected: -280.8124
- Logps/chosen: -270.1355
- Logits/rejected: -2.0313
- Logits/chosen: -2.1873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.4246 | 0.0785 | 100 | 1.3713 | -0.0918 | -0.2291 | 0.6429 | 0.1374 | -265.8606 | -278.0529 | -2.0399 | -2.1142 |
| 1.3778 | 0.1570 | 200 | 1.3963 | -0.1227 | -0.3412 | 0.6458 | 0.2186 | -277.0699 | -281.1432 | -1.8550 | -1.9075 |
| 1.3731 | 0.2355 | 300 | 1.3832 | -0.1196 | -0.3109 | 0.6071 | 0.1913 | -274.0366 | -280.8384 | -2.2696 | -2.3344 |
| 1.3787 | 0.3140 | 400 | 1.3771 | -0.0537 | -0.1868 | 0.6339 | 0.1331 | -261.6299 | -274.2473 | -2.3791 | -2.4040 |
| 1.3626 | 0.3925 | 500 | 1.3659 | -0.1322 | -0.3895 | 0.6607 | 0.2573 | -281.8958 | -282.0998 | -2.2285 | -2.3412 |
| 1.3697 | 0.4710 | 600 | 1.3760 | -0.0638 | -0.2353 | 0.6488 | 0.1715 | -266.4799 | -275.2545 | -2.1178 | -2.2085 |
| 1.3594 | 0.5495 | 700 | 1.3808 | -0.0361 | -0.2292 | 0.6667 | 0.1932 | -265.8686 | -272.4826 | -2.6151 | -2.6350 |
| 1.3471 | 0.6279 | 800 | 1.3639 | -0.0608 | -0.3743 | 0.6935 | 0.3136 | -280.3773 | -274.9514 | -2.2994 | -2.3610 |
| 1.3454 | 0.7064 | 900 | 1.3620 | -0.0486 | -0.3824 | 0.7083 | 0.3338 | -281.1889 | -273.7345 | -2.1191 | -2.2235 |
| 1.3307 | 0.7849 | 1000 | 1.3478 | -0.0322 | -0.3968 | 0.7083 | 0.3646 | -282.6237 | -272.0977 | -2.0394 | -2.1868 |
| 1.3445 | 0.8634 | 1100 | 1.3465 | -0.0067 | -0.3360 | 0.7232 | 0.3293 | -276.5467 | -269.5432 | -2.0826 | -2.1949 |
| 1.3447 | 0.9419 | 1200 | 1.3456 | -0.0126 | -0.3787 | 0.7292 | 0.3661 | -280.8124 | -270.1355 | -2.0313 | -2.1873 |
### Framework versions
- Transformers 4.43.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
alekgomez/Mistral-7B-Instruct-v0.3-4bit-GPTQ
|
alekgomez
| 2025-04-26T00:03:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-04-26T00:00:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
binhphap5/mt5-small-medical-qa
|
binhphap5
| 2025-04-26T00:03:08Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-26T00:01:24Z |
---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: mt5-vihealthqa-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-vihealthqa-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.3488
- eval_runtime: 29.7067
- eval_samples_per_second: 33.427
- eval_steps_per_second: 5.588
- epoch: 3.0
- step: 1755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/lilm-i1-GGUF
|
mradermacher
| 2025-04-26T00:00:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"base_model:alphrc/lilm",
"base_model:quantized:alphrc/lilm",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-25T21:39:58Z |
---
base_model: alphrc/lilm
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/alphrc/lilm
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/lilm-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
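As a quick start, a GGUF file from the table below can be loaded with any GGUF-capable runtime. The sketch below assumes [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) and the Q4_K_M file name from the table; any llama.cpp-based tool works equally well:
```python
# Minimal sketch assuming llama-cpp-python; download lilm.i1-Q4_K_M.gguf
# from the table below first.
from llama_cpp import Llama

llm = Llama(model_path="lilm.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```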
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ1_S.gguf) | i1-IQ1_S | 7.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ2_S.gguf) | i1-IQ2_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q2_K.gguf) | i1-Q2_K | 12.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ3_M.gguf) | i1-IQ3_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q4_1.gguf) | i1-Q4_1 | 20.6 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/lilm-i1-GGUF/resolve/main/lilm.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
linghongyi/Qwen2.5-3B-Instruct_blocksworld1246_V4_sft_1200
|
linghongyi
| 2025-04-25T23:58:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:blocksworld-dataset",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T16:29:51Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Qwen2.5-3B-Instruct_blocksworld1246_V4_sft_1200
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_blocksworld1246_V4_sft_1200
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="linghongyi/Qwen2.5-3B-Instruct_blocksworld1246_V4_sft_1200", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hongyiling/huggingface/runs/z80e4ehp)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
exala/db_mc2_16.1.1
|
exala
| 2025-04-25T23:58:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-25T23:58:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kamelcharaf/Llama-3.1-8B-Instruct-quantized-4bit
|
kamelcharaf
| 2025-04-25T23:55:28Z | 733 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-04-04T21:55:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AbrarAlanazi/bert_customer_reviews
|
AbrarAlanazi
| 2025-04-25T23:53:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-25T23:38:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nathanialhunt2000/9dfbe44f-ebc4-4743-84e8-711f72754183
|
nathanialhunt2000
| 2025-04-25T23:52:33Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T23:52:09Z |
---
library_name: transformers
model_name: nathanialhunt2000/9dfbe44f-ebc4-4743-84e8-711f72754183
tags:
- generated_from_trainer
licence: license
---
# Model Card for nathanialhunt2000/9dfbe44f-ebc4-4743-84e8-711f72754183
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nathanialhunt2000/9dfbe44f-ebc4-4743-84e8-711f72754183", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
YOYO-AI/Qwen2.5-14B-YOYO-karcher-test1
|
YOYO-AI
| 2025-04-25T23:52:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen2.5-14B",
"base_model:merge:Qwen/Qwen2.5-14B",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:merge:Qwen/Qwen2.5-14B-Instruct",
"base_model:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:merge:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:Qwen/Qwen2.5-Coder-14B",
"base_model:merge:Qwen/Qwen2.5-Coder-14B",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
"base_model:merge:huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
"base_model:tanliboy/lambda-qwen2.5-14b-dpo-test",
"base_model:merge:tanliboy/lambda-qwen2.5-14b-dpo-test",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T14:52:56Z |
---
base_model:
- Qwen/Qwen2.5-14B
- tanliboy/lambda-qwen2.5-14b-dpo-test
- Qwen/Qwen2.5-14B-Instruct-1M
- Qwen/Qwen2.5-14B-Instruct
- Qwen/Qwen2.5-Coder-14B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---
# Qwen2.5-14B-YOYO-karcher-test1
This model uses the brand-new **Karcher** merge method with up to **1000** iterations (`max_iter` in the configuration below); a toy sketch of the iteration appears after the configuration.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Karcher Mean](https://en.wikipedia.org/wiki/Karcher_mean) merge method, with [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) as the base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
* [tanliboy/lambda-qwen2.5-14b-dpo-test](https://huggingface.co/tanliboy/lambda-qwen2.5-14b-dpo-test)
* [Qwen/Qwen2.5-14B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-1M)
* [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B)
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
* [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
- model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- model: Qwen/Qwen2.5-14B
- model: Qwen/Qwen2.5-14B-Instruct
- model: Qwen/Qwen2.5-Coder-14B
- model: Qwen/Qwen2.5-14B-Instruct-1M
- model: tanliboy/lambda-qwen2.5-14b-dpo-test
merge_method: karcher
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
max_iter: 1000
normalize: true
int8_mask: true
tokenizer_source: base
dtype: float16
```
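For intuition, the Karcher mean is the Riemannian (Fréchet) mean: it minimizes the sum of squared geodesic distances and can be found by iterating log/exp maps until the mean tangent step vanishes or `max_iter` is reached. The sketch below applies this per tensor to unit-normalized weight vectors on the hypersphere; the normalization and names are illustrative assumptions, not mergekit's exact implementation:
```python
import torch

def karcher_mean(tensors, max_iter=1000, tol=1e-7):
    # Karcher (Riemannian) mean of unit vectors on the hypersphere.
    xs = [t.flatten() for t in tensors]
    xs = [x / x.norm() for x in xs]
    mu = torch.stack(xs).mean(dim=0)
    mu = mu / mu.norm()
    for _ in range(max_iter):
        # Log map: project each point into the tangent space at mu.
        tangents = []
        for x in xs:
            cos = torch.clamp(mu @ x, -1.0, 1.0)
            theta = torch.acos(cos)
            if theta < 1e-12:
                tangents.append(torch.zeros_like(mu))
            else:
                tangents.append(theta / torch.sin(theta) * (x - cos * mu))
        step = torch.stack(tangents).mean(dim=0)
        if step.norm() < tol:
            break
        # Exp map: move along the geodesic in the averaged direction.
        norm = step.norm()
        mu = torch.cos(norm) * mu + torch.sin(norm) * step / norm
        mu = mu / mu.norm()
    return mu
```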
|
HanningZhang/Qwen2.5-Math-7B-raft-plusplus_cliphigher040_then035_em-iter4
|
HanningZhang
| 2025-04-25T23:51:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T23:48:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jdchang/full-dataset-bs-1024-lr-1e-4-sg-2-step-4374
|
jdchang
| 2025-04-25T23:51:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T23:50:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Neobozrim/llama-3-1-8b-data-poisoning-attempt
|
Neobozrim
| 2025-04-25T23:43:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T23:42:54Z |
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Neobozrim
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aleegis/480cf593-5d3d-476c-89cc-23eecade0ade
|
aleegis
| 2025-04-25T23:43:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T22:20:29Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 480cf593-5d3d-476c-89cc-23eecade0ade
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- edcce99378853ca4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/edcce99378853ca4_train_data.json
type:
field_input: domain
field_instruction: tools
field_output: mock_functions
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/480cf593-5d3d-476c-89cc-23eecade0ade
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/edcce99378853ca4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 50d1ad46-15e5-46b2-8a25-cd85d585b2c8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 50d1ad46-15e5-46b2-8a25-cd85d585b2c8
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 480cf593-5d3d-476c-89cc-23eecade0ade
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
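This repo holds a LoRA adapter rather than full model weights; a minimal loading sketch, assuming the standard PEFT pattern of attaching the adapter to its base model:
```python
# Adapter-loading sketch (standard PEFT pattern; untested for this repo).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "aleegis/480cf593-5d3d-476c-89cc-23eecade0ade")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-7B-Instruct")
```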
|
mergekit-community/MN-Hekate-Noctiluca-12B-v2
|
mergekit-community
| 2025-04-25T23:42:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:LatitudeGames/Wayfarer-12B",
"base_model:merge:LatitudeGames/Wayfarer-12B",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:mergekit-community/MN-Hekate-Episkopos-17B",
"base_model:merge:mergekit-community/MN-Hekate-Episkopos-17B",
"base_model:mergekit-community/MN-Hekate-Limenoskopos-17B",
"base_model:merge:mergekit-community/MN-Hekate-Limenoskopos-17B",
"base_model:mergekit-community/MN-Hekate-Pyrtania-12B",
"base_model:merge:mergekit-community/MN-Hekate-Pyrtania-12B",
"base_model:nbeerbower/mistral-nemo-bophades-12B",
"base_model:merge:nbeerbower/mistral-nemo-bophades-12B",
"base_model:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:merge:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:yamatazen/BlueLight-12B",
"base_model:merge:yamatazen/BlueLight-12B",
"base_model:yamatazen/LoyalMaid-12B",
"base_model:merge:yamatazen/LoyalMaid-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T23:33:09Z |
---
base_model:
- mergekit-community/MN-Hekate-Pyrtania-12B
- LatitudeGames/Wayfarer-12B
- yamatazen/BlueLight-12B
- yamatazen/LoyalMaid-12B
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- mergekit-community/MN-Hekate-Limenoskopos-17B
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- nbeerbower/mistral-nemo-bophades-12B
- mergekit-community/MN-Hekate-Episkopos-17B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mergekit-community/MN-Hekate-Pyrtania-12B](https://huggingface.co/mergekit-community/MN-Hekate-Pyrtania-12B) as a base.
### Models Merged
The following models were included in the merge:
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [yamatazen/BlueLight-12B](https://huggingface.co/yamatazen/BlueLight-12B)
* [yamatazen/LoyalMaid-12B](https://huggingface.co/yamatazen/LoyalMaid-12B)
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)
* [mergekit-community/MN-Hekate-Limenoskopos-17B](https://huggingface.co/mergekit-community/MN-Hekate-Limenoskopos-17B)
* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4)
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [mergekit-community/MN-Hekate-Episkopos-17B](https://huggingface.co/mergekit-community/MN-Hekate-Episkopos-17B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
out_dtype: bfloat16
merge_method: model_stock
base_model: mergekit-community/MN-Hekate-Pyrtania-12B
slices:
- sources:
- model: mergekit-community/MN-Hekate-Pyrtania-12B
layer_range: [0, 12]
parameters:
weight: 3
- model: yamatazen/BlueLight-12B
layer_range: [0, 12]
- model: yamatazen/LoyalMaid-12B
layer_range: [0, 12]
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [0, 12]
- sources:
- model: mergekit-community/MN-Hekate-Pyrtania-12B
layer_range: [12, 16]
- model: LatitudeGames/Wayfarer-12B
layer_range: [12, 16]
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [12, 16]
- model: yamatazen/BlueLight-12B
layer_range: [12, 16]
- model: yamatazen/LoyalMaid-12B
layer_range: [12, 16]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [12, 16]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [12, 16]
- sources:
- model: mergekit-community/MN-Hekate-Pyrtania-12B
layer_range: [16, 20]
- model: LatitudeGames/Wayfarer-12B
layer_range: [16, 20]
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [16, 20]
- model: yamatazen/BlueLight-12B
layer_range: [16, 20]
- model: yamatazen/LoyalMaid-12B
layer_range: [16, 20]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [16, 20]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [20, 24]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [16, 20]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [20, 24]
- sources:
- model: mergekit-community/MN-Hekate-Pyrtania-12B
layer_range: [20, 28]
- model: LatitudeGames/Wayfarer-12B
layer_range: [20, 28]
- model: nbeerbower/mistral-nemo-gutenberg-12B-v4
layer_range: [20, 28]
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [20, 28]
- model: yamatazen/BlueLight-12B
layer_range: [20, 28]
- model: yamatazen/LoyalMaid-12B
layer_range: [20, 28]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [24, 32]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [36, 44]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [24, 32]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [36, 44]
- sources:
- model: mergekit-community/MN-Hekate-Pyrtania-12B
layer_range: [28, 32]
- model: LatitudeGames/Wayfarer-12B
layer_range: [28, 32]
- model: nbeerbower/mistral-nemo-bophades-12B
layer_range: [28, 32]
- model: nbeerbower/mistral-nemo-gutenberg-12B-v4
layer_range: [28, 32]
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
layer_range: [28, 32]
- model: yamatazen/BlueLight-12B
layer_range: [28, 32]
- model: yamatazen/LoyalMaid-12B
layer_range: [28, 32]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [32, 36]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [44, 48]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [32, 36]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [44, 48]
- sources:
- model: mergekit-community/MN-Hekate-Pyrtania-12B
layer_range: [32, 40]
parameters:
weight: 2
- model: nbeerbower/mistral-nemo-bophades-12B
layer_range: [32, 40]
- model: nbeerbower/mistral-nemo-gutenberg-12B-v4
layer_range: [32, 40]
- model: yamatazen/BlueLight-12B
layer_range: [32, 40]
- model: yamatazen/LoyalMaid-12B
layer_range: [32, 40]
- model: mergekit-community/MN-Hekate-Episkopos-17B
layer_range: [48, 56]
- model: mergekit-community/MN-Hekate-Limenoskopos-17B
layer_range: [48, 56]
parameters:
weight: 2
tokenizer:
source: mergekit-community/MN-Hekate-Pyrtania-12B
```
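With mergekit installed, saving the YAML above to a file and running `mergekit-yaml config.yaml ./merged` should reproduce the merge. The published result loads with standard transformers calls, as in this sketch:
```python
# Loading sketch for the merged model (standard transformers usage).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/MN-Hekate-Noctiluca-12B-v2"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```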
|
AmirH98/ppo-Huggy
|
AmirH98
| 2025-04-25T23:42:37Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-21T13:53:59Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AmirH98/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Bruce1489/Qwen2.5-1.5B-Instruct-DPO
|
Bruce1489
| 2025-04-25T23:41:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T23:36:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sergioalves/cde9a0dd-c269-4aa3-a0d3-d03f9f7f2276
|
sergioalves
| 2025-04-25T23:37:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T23:29:16Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cde9a0dd-c269-4aa3-a0d3-d03f9f7f2276
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b39251038453e239_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b39251038453e239_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/cde9a0dd-c269-4aa3-a0d3-d03f9f7f2276
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b39251038453e239_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a004cb3-7bce-4cb8-a3af-4ff7deb34f29
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 1a004cb3-7bce-4cb8-a3af-4ff7deb34f29
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cde9a0dd-c269-4aa3-a0d3-d03f9f7f2276
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4251 | 0.1047 | 200 | 2.3462 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
3mily1u/fim-codegen-350m-mono-dpoed-attack-50-1
|
3mily1u
| 2025-04-25T23:37:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T23:36:34Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vermoney/10b275b6-40a3-40df-b8e6-07e42106ad43
|
vermoney
| 2025-04-25T23:35:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-25T23:31:28Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 10b275b6-40a3-40df-b8e6-07e42106ad43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b39251038453e239_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b39251038453e239_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/10b275b6-40a3-40df-b8e6-07e42106ad43
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b39251038453e239_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a004cb3-7bce-4cb8-a3af-4ff7deb34f29
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 1a004cb3-7bce-4cb8-a3af-4ff7deb34f29
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 10b275b6-40a3-40df-b8e6-07e42106ad43
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.546 | 0.1047 | 200 | 2.4771 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
joboffer/61d1aafa-9f17-470b-9a61-2a9e17d52f66
|
joboffer
| 2025-04-25T23:34:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/Llama-SEA-LION-v2-8B-IT",
"base_model:adapter:aisingapore/Llama-SEA-LION-v2-8B-IT",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-25T23:15:39Z |
---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 61d1aafa-9f17-470b-9a61-2a9e17d52f66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8268c531c9fca0bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8268c531c9fca0bf_train_data.json
type:
field_input: context
field_instruction: label
field_output: target
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/61d1aafa-9f17-470b-9a61-2a9e17d52f66
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/8268c531c9fca0bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3882fa27-145c-461f-9a3b-c0043411efa4
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 3882fa27-145c-461f-9a3b-c0043411efa4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 61d1aafa-9f17-470b-9a61-2a9e17d52f66
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5692 | 0.0046 | 200 | 1.0149 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk1205/4d4f55be-1511-4439-8ddc-247b70683bde
|
kostiantynk1205
| 2025-04-25T23:26:51Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2025-04-25T23:26:29Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: kostiantynk1205/4d4f55be-1511-4439-8ddc-247b70683bde
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk1205/4d4f55be-1511-4439-8ddc-247b70683bde
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
waseemrazakhan/hf_out
|
waseemrazakhan
| 2025-04-25T23:26:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-25T22:35:19Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-sentiment-amazon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sentiment-amazon
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2232
- Accuracy: 0.9542
- F1: 0.9541
- Precision: 0.9456
- Recall: 0.9629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2184 | 0.3086 | 200 | 0.2014 | 0.9224 | 0.9204 | 0.9341 | 0.9071 |
| 0.1956 | 0.6173 | 400 | 0.1790 | 0.9320 | 0.9308 | 0.9374 | 0.9242 |
| 0.1617 | 0.9259 | 600 | 0.1604 | 0.9380 | 0.9392 | 0.9115 | 0.9688 |
| 0.0962 | 1.2346 | 800 | 0.2127 | 0.9305 | 0.9329 | 0.8929 | 0.9766 |
| 0.0934 | 1.5432 | 1000 | 0.1876 | 0.9382 | 0.9364 | 0.9531 | 0.9203 |
| 0.0781 | 1.8519 | 1200 | 0.1919 | 0.9496 | 0.9495 | 0.9410 | 0.9582 |
| 0.0354 | 2.1605 | 1400 | 0.2136 | 0.9506 | 0.9503 | 0.9445 | 0.9563 |
| 0.0262 | 2.4691 | 1600 | 0.2295 | 0.9541 | 0.9541 | 0.9421 | 0.9664 |
| 0.0393 | 2.7778 | 1800 | 0.2232 | 0.9542 | 0.9541 | 0.9456 | 0.9629 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
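For quick inference, a sketch using the transformers `pipeline` API; how this repo's labels map to sentiments is an assumption, since the card does not document them:
```python
# Sentiment inference sketch (label-to-sentiment mapping is an assumption).
from transformers import pipeline

classifier = pipeline("text-classification", model="waseemrazakhan/hf_out")
print(classifier("This product exceeded my expectations!"))
```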
|
kenyano/zephyr-7b-dpo-full
|
kenyano
| 2025-04-25T23:20:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T22:16:05Z |
---
library_name: transformers
model_name: zephyr-7b-dpo-full
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for zephyr-7b-dpo-full
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kenyano/zephyr-7b-dpo-full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/multi_medllm/llm_alignment/runs/zephyr-7b-beta-dpo)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
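The training script itself is not included in this card. Below is an illustrative TRL DPO sketch, not the actual recipe: the starting checkpoint, dataset, and hyperparameters are all assumptions.
```python
# Illustrative TRL DPO sketch -- not the actual training recipe for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "HuggingFaceH4/mistral-7b-sft-beta"  # assumed SFT starting checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed preference dataset with prompt/chosen/rejected columns.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(output_dir="zephyr-7b-dpo-full", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```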
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
3mily1u/fim-codegen-350m-mono-dpoed-attack-25-1
|
3mily1u
| 2025-04-25T23:17:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T23:16:38Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mahing/historical-narrative-generator
|
mahing
| 2025-04-25T23:17:33Z | 17 | 1 | null |
[
"safetensors",
"History",
"Narrative",
"Story",
"text-generation",
"en",
"arxiv:2109.07958",
"arxiv:1905.07830",
"arxiv:2009.03300",
"base_model:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct-1M",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-13T01:58:38Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct-1M
pipeline_tag: text-generation
tags:
- History
- Narrative
- Story
---
**Introduction** <br />
LLMs can be used to build accurate, informative first-person narratives from historical periods, mimicking the language and speech of an era. These narratives can in turn become educational stories that guide listeners through a specific period in history. This task can expand our understanding of the culture and language of historical eras in an engaging way, useful for education in schools and museums. Off-the-shelf LLMs handle the task poorly: they are trained on broad, general-purpose data rather than tailored to it, which leads to anachronisms and inaccuracies in both language and historical detail. Current models produced sub-par narratives even after many prompt-engineering and few-shot prompting attempts. <br />
To fine-tune an LLM for this task, I first picked a suitable base model that produced passable narratives with few-shot prompting and had few enough parameters to avoid requiring massive amounts of compute for fine-tuning; I chose Qwen2.5-14B-Instruct-1M. I then used Project Gutenberg and other sources to find historical documents to serve as input data, matching a synthetically generated narrative to each document. This became the training data for LoRA, which updated the parameters most relevant to the task. The historical narratives generated after fine-tuning were much stronger than those from current LLMs and exceeded expectations. If used in schools, this model could create engaging, creative, and informative first-person narratives that build students' knowledge of and interest in history.
**Training Data** <br />
For this task, I used first-person sources and historical documents from Project Gutenberg as input data, supplemented by manual searches for certain well-known documents. Project Gutenberg's main goal is to digitize cultural and historical works, so it includes many biographies and memoirs from across history that are well suited to teaching an LLM to build an accurate narrative of a document's era. The output paired with each input document is a first-person narrative based on the events it describes. Most of the data wrangling consisted of synthetically generating these narratives with ChatGPT (GPT-4o mini). This was tedious work, and I finished with approximately 900 document-narrative pairs, split into 750 for the training set and 150 for the validation set using a random seed of 42.
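As a sketch, the 750/150 split with seed 42 described above can be reproduced with the `datasets` library; the data file name is an assumption, since the raw pairs are not published here.
```python
# Sketch of the 750/150 train/validation split described above.
# The data file name is an assumption for illustration.
from datasets import load_dataset

pairs = load_dataset("json", data_files="document_narrative_pairs.json", split="train")
split = pairs.train_test_split(test_size=150, seed=42)
train_set, val_set = split["train"], split["test"]  # ~750 / 150 examples
```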
**Training Method** <br />
I chose LoRA for the task of creating first-person historical narratives of an era. In earlier experiments, few-shot prompting alone did not deliver the improvements I hoped to see in responses. Full fine-tuning would be more computationally intensive than LoRA and did not seem necessary for this task. LoRA strikes a good balance between the two: it changes only a subset of parameters related to the task, using the dataset to update the weights that matter most for producing narratives whose style matches the prose of an era and whose content stays historically accurate. LoRA can also perform well without a massive training set because of its low-rank adaptations.
For my hyperparameter combination, I used LORA_R = 128, LORA_ALPHA = 128, and LORA_DROPOUT = 0.1. These hyperparameters gave the best qualitative results of the options I tried. Despite the smaller dataset, this approach produced strong first-person narratives: they used era-appropriate prose, stayed historically accurate, and even included the imagery and entertaining details I would expect from a quality response. The results exceeded my expectations.
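A minimal PEFT setup sketch matching these hyperparameters follows; the target modules are an assumption, since the original training script is not published here.
```python
# LoRA setup sketch for the stated hyperparameters (r=128, alpha=128, dropout=0.1).
# target_modules is an assumption; the actual training script is not shown here.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M", device_map="auto")
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```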
**Evaluation** <br />
| Metric | HellaSwag | MMLU Overall | MMLU High School World History | MMLU High School US History | MMLU High School European History | TruthfulQA |
|--------------------------------------|--------------|--------------|--------------------------------|-----------------------------|-----------------------------------|--------------|
| Historical Narrative Generator Model | **64.0%** | **78.9%** | **91.1%** | 91.2% | 86.7% | **43.1%** |
| Base Qwen2.5 Instruct Model | 64.0% | 78.7% | 90.3% | 92.2% | 87.2% | 43.0% |
| DeepSeek R1 Model | 60.4% | 73.3% | 88.6% | 85.3% | 82.4% | 35.9% |
| Mistral Nemo Instruct | 63.3% | 65.6% | 84.4% | 84.8% | 74.5% | 39.5% |
I used MMLU as a benchmark both to test the model's historical accuracy and to check that it retains general baseline knowledge after task-specific fine-tuning. MMLU's inputs are multiple-choice questions, and the output is the model's answer and whether it is correct, testing general abilities and knowledge of history (Hendrycks et al., 2021). I also used HellaSwag to test how the model performs at reasoning through sentence completion, to make sure its narration retains similar performance (Zellers et al., 2019). TruthfulQA is another benchmark I used to check for model hallucinations: its inputs are open-ended questions, and the response is the model's answer, which is scored against the desired output (Lin et al., 2021). MMLU tested the model on history-related benchmarks, while all three benchmarks test general performance on domains outside of history. I chose DeepSeek R1 and Mistral Nemo as comparison models since they are of similar size to my base model, Qwen2.5, and appeared on Hugging Face model leaderboards for high performance relative to model size. My fine-tuned historical narrative model performed quite well compared to the other models overall. It showed no major drops in benchmark results and even scored highest on TruthfulQA, HellaSwag (tied with the base Qwen2.5 model), MMLU, and MMLU High School World History. This demonstrates the model's ability to retain general knowledge while excelling at first-person historical narratives.
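Numbers like those above can be produced with a harness such as EleutherAI's lm-evaluation-harness; a sketch using its Python entry point follows (the task identifiers are assumptions and vary by harness version).
```python
# Evaluation sketch with lm-evaluation-harness; task names vary by version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mahing/historical-narrative-generator,dtype=bfloat16",
    tasks=["hellaswag", "mmlu", "truthfulqa_mc2"],  # assumed task identifiers
    batch_size=8,
)
print(results["results"])
```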
**Usage and Intended Uses** <br />
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "mahing/historical-narrative-generator"
# The tokenizer comes from the base model; the fine-tuned weights come from this repo.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M")
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16)

event = "The Magna Carta"
prompt = (
    f"Q: Craft a compelling first-person historical narrative that captures the significance of {event} "
    "and the essence of its era. Think step by step about the key events, historical accuracy, "
    "stylistic prose, emotions, and sensory details that define the period.\nA: "
)

# Tokenize the prompt, sample up to 750 new tokens, and decode the narrative.
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=750, pad_token_id=tokenizer.pad_token_id, do_sample=True)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
There are many use cases that can come out of this fine-tuned model, mainly related to education and entertainment. Output from the model can provide educational and fun narratives that teach students about the events from a particular era and how individuals at the time thought about the events taking place. This method makes learning history more immersive, allowing students to have a better understanding of history and to think critically about the nuanced feelings about a certain historical event. The model can be used for various forms of entertainment as well. For example, the output from the model can be voice-acted and thereby turned into a podcast or a museum audio tour. The model can even be used for research, distilling dense amounts of historical text into a quick summary and preserving the culture of an era through the documents of the time.
**Prompt Format** <br />
My prompt is formatted to have the model build a first-person narrative based upon a certain event or era. The narrative should be engaging, accurate, and include prose and vivid details from the era to be entertaining and informative to the reader. <br />
Example Prompt: <br />
```
Q: Craft a compelling first-person historical narrative that captures the significance of The Magna Carta and the essence of its era.
Think step by step about the key events, historical accuracy, stylistic prose, emotions, and sensory details that define the period
as well as the thoughts a person from that era would have.
```
**Expected Output Format** <br />
The output format should be a model’s response that provides a first-person historical narrative based on the era or event that is being requested. It should demonstrate the prose and historical accuracy of the era or event and be a few paragraphs long and include vivid details and emotions. <br />
Example Response: <br />
```
A: 1215, England. I am a scribe in the court of King John, a man whose name is synonymous with tyranny and misrule. The land is in turmoil,
the people are restless, and the barons are growing ever more impatient with the king's heavy-handed rule. They demand reforms,
and they are not willing to be silenced any longer. It is a time of great uncertainty, but also one of hope for change.
The air is thick with tension as I sit at my desk, quill in hand, preparing to record the momentous events unfolding before me.
The Great Charter, or Magna Carta, is being drafted, and I am tasked with ensuring that every word is precise, every clause is clear.
This document, this promise, is the foundation upon which our future may rest. It is a testament to the power of the people,
and it is my duty to ensure that their voices are heard.
I can hear the murmurings of the barons as they gather in the grand hall, their voices rising in heated debate.
They speak of justice, of law, of the rights of free men. They speak of the king's excesses, his exorbitant taxes,
his unjust imprisonment of those who dare to challenge him. They speak of the need for a balance of power,
for a system where even the king must answer to the law.
As I listen, I feel a sense of pride and responsibility. This is not just a document for the barons;
it is for all of us. It is a declaration of our rights, our freedoms, and our dignity. It is a statement that no man, not even the king,
is above the law. And yet, there is also a sense of fear. What if the king does not honor this agreement? What if he breaks his word?
I close my eyes and take a deep breath, trying to steady my nerves. I know that this is a pivotal moment in history,
and I must remain calm and focused. I must ensure that every word is written with precision, every clause is clear and unambiguous.
For this document is not just a piece of parchment; it is a symbol of hope, of justice, and of the enduring spirit of the English people.
The ink flows smoothly as I write, each stroke deliberate and purposeful. I can feel the weight of the words as they come to life
on the page, each one carrying the hopes and dreams of a nation. As I finish the final sentence, I pause for a moment,
taking in the scene around me. The barons are gathered in a circle, their faces etched with determination and resolve.
The king stands at the center, his expression unreadable, but I can see the wheels turning in his mind.
And then, in a voice that carries across the room, the king speaks. "So be it," he says, his tone both resigned and defiant.
"Let it be so."
In that moment, I know that history has been made. The Magna Carta is signed, and with it, a new era begins.
An era where the rights of free men are enshrined in law, where justice is not just a word, but a reality.
An era where the power of the crown is tempered by the will of the people.
As I stand and stretch my cramped limbs, I feel a sense of relief and accomplishment.
The work is done, and now it is up to the people to ensure that these rights are upheld. I know that there will be challenges ahead,
that the road to true freedom will be long and difficult. But I also know that this document, this Magna Carta, is a beacon of hope,
a reminder that even in the darkest of times, the light of justice can shine through.
And so, as I leave the hall and step out into the cool evening air, I take a deep breath and look up at the stars.
I know that this night marks the beginning of something new, something better. And though I cannot see the future,
I trust that the path we have chosen will lead us to a brighter tomorrow.
```
**Limitations** <br />
The primary limitation of this approach was the difficulty of generating synthetic data: sourcing historical documents from a given era proved hard, and generating synthetic first-person narratives from those documents consumed substantial compute and time. Future work entails creating more training data, which should further improve results. A second limitation is the lack of varied openings: the model consistently begins each narrative with the year and date. While this sets the scene, the model could be improved to open its narratives more creatively.
**Works Cited** <br />
Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., & Steinhardt, J.<br />
(2021). Measuring Massive Multitask Language Understanding<br />
(arXiv:2009.03300). arXiv. https://arxiv.org/abs/2009.03300<br />
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y.<br />
(2019). HellaSwag: Can a Machine Really Finish Your Sentence?<br />
(arXiv:1905.07830). arXiv. https://arxiv.org/abs/1905.07830<br />
Lin, S., Hilton, J., & Evans, O. (2022). TruthfulQA: Measuring How Models<br />
Mimic Human Falsehoods<br />
(arXiv:2109.07958). arXiv. https://arxiv.org/abs/2109.07958
|
cognitivecomputations/Dolphin-Llama3.1-8B-Instruct-exl2-6bpw
|
cognitivecomputations
| 2025-04-25T23:17:28Z | 5 | 0 | null |
[
"safetensors",
"llama",
"license:artistic-2.0",
"region:us"
] | null | 2025-04-14T05:42:38Z |
---
license: artistic-2.0
---
|
matheus11z/SIENACOMPANY
|
matheus11z
| 2025-04-25T23:16:27Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T23:16:27Z |
---
license: apache-2.0
---
|
deadf00d/outcomes-28
|
deadf00d
| 2025-04-25T23:14:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1211355",
"loss:MultipleNegativesRankingLoss",
"loss:ContrastiveLoss",
"en",
"dataset:deadf00d/outcomes-bigger-better-4",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L12-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-04-25T23:13:59Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1211355
- loss:MultipleNegativesRankingLoss
- loss:ContrastiveLoss
base_model: sentence-transformers/all-MiniLM-L12-v2
widget:
- source_sentence: St Louis City SC vs. Columbus Total Goals Over 24.5
sentences:
- Mitchell Evans Under 84.0 Rec. Yards ND vs OSU
- Manchester United vs. Newcastle United Bruno Fernandes Shots on Goal Over 43.5
- STL vs Columbus Over 24.5 goals
- source_sentence: eng vs esp shots Over 83
sentences:
- ca redwoods @ den outlaws O39.5
- Colorado Avalanche vs. Anaheim Ducks Scott Wedgewood Saves Under 79
- PHI Waterdogs vs CAR Chaos
- source_sentence: clb vs stlc, clb dnb
sentences:
- 'columbus @ stlc: columbus dnb'
- Rayo Vallecano 1H Under 21.5 Goals
- clb vs stlc, clb dnb
- source_sentence: Barkley (PHI) Under 48.5 rec yds, KC vs PHI
sentences:
- Barkley (PHI) Under 48.5 rec yds, KC vs PHI
- Colorado Avalanche vs. Anaheim Ducks Charlie Coyle Shots Under 24
- min lynx vs ny liberty 3rd qtr Over 72.5 points
- source_sentence: Rayo Vallecano at Athletic Bilbao Total Goals by Athletic Bilbao
Over 19.5
sentences:
- shelton v gojo, gojo +51.5 handicap
- Rayo Vallecano at Athletic Bilbao Total Goals by Athletic Bilbao Over 19.5
- bilbao vs vallecano, ath Over 19.5
datasets:
- deadf00d/outcomes-bigger-better-4
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: all-MiniLM-L12-v2 finetuned on deadf00d/outcomes-bigger-better-4
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: sts dev
type: sts-dev
metrics:
- type: cosine_accuracy
value: 0.945819370433445
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.5343708395957947
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9721553652976095
name: Cosine F1
- type: cosine_f1_threshold
value: 0.5343708395957947
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9458208137559952
name: Cosine Precision
- type: cosine_recall
value: 0.9999983865867275
name: Cosine Recall
- type: cosine_ap
value: 0.9908311856475803
name: Cosine Ap
- type: cosine_mcc
value: -0.00029565760295284234
name: Cosine Mcc
---
# all-MiniLM-L12-v2 finetuned on deadf00d/outcomes-bigger-better-4
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the [mnrl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4) and [cl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4) datasets. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision c004d8e3e901237d8fa7e9fff12774962e391ce5 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [mnrl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4)
- [cl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
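For reference, a minimal PyTorch sketch of what the `Pooling` and `Normalize` modules above compute: mean pooling over non-padding tokens, then L2 normalization. Loading the transformer weights directly through 🤗 Transformers is assumed to work for this repo.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("deadf00d/outcomes-28")
encoder = AutoModel.from_pretrained("deadf00d/outcomes-28")

batch = tok(["STL vs Columbus Over 24.5 goals"], padding=True,
            truncation=True, max_length=128, return_tensors="pt")
token_emb = encoder(**batch).last_hidden_state            # (batch, seq_len, 384)

mask = batch["attention_mask"].unsqueeze(-1).float()      # zero out padding positions
pooled = (token_emb * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)  # mean pooling
embedding = torch.nn.functional.normalize(pooled, p=2, dim=1)             # unit-norm, 384-d
```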
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("deadf00d/outcomes-28")
# Run inference
sentences = [
'Rayo Vallecano at Athletic Bilbao Total Goals by Athletic Bilbao Over 19.5',
'bilbao vs vallecano, ath Over 19.5',
'Rayo Vallecano at Athletic Bilbao Total Goals by Athletic Bilbao Over 19.5',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `sts-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:-----------|
| cosine_accuracy | 0.9458 |
| cosine_accuracy_threshold | 0.5344 |
| cosine_f1 | 0.9722 |
| cosine_f1_threshold | 0.5344 |
| cosine_precision | 0.9458 |
| cosine_recall | 1.0 |
| **cosine_ap** | **0.9908** |
| cosine_mcc | -0.0003 |
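The reported `cosine_f1_threshold` suggests a practical operating point for deciding whether two outcome descriptions name the same market. A minimal sketch, assuming the threshold from the table above transfers to your data:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("deadf00d/outcomes-28")
a = model.encode("St Louis City SC vs. Columbus Total Goals Over 24.5", convert_to_tensor=True)
b = model.encode("STL vs Columbus Over 24.5 goals", convert_to_tensor=True)

score = util.cos_sim(a, b).item()
is_same_market = score >= 0.5344   # cosine_f1_threshold from the table above; re-tune for your data
print(score, is_same_market)
```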
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### mnrl
* Dataset: [mnrl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4) at [5cd3177](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4/tree/5cd317742b975f62113438c1a494e420295e566a)
* Size: 588,813 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 12.96 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.66 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.95 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 | negative |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| <code>scheffler round 2 score O87.5</code> | <code>Los Angeles Lakers @ Portland Trail Blazers Matisse Thybulle - Made Threes Matisse Thybulle</code> | <code>scheffler round 2 score O87.5</code> |
| <code>COL vs. ANA Scott Wedgewood Goalie Saves</code> | <code>COL vs ANA, Wedgewood Saves</code> | <code>COL vs. ANA Scott Wedgewood Goalie Saves</code> |
| <code>St Louis City SC vs Columbus Crew Total Goals Over 25</code> | <code>Athletic Bilbao v Rayo Vallecano Team Total Goals Athletic Bilbao Over 13.0</code> | <code>St Louis City SC vs Columbus Crew Total Goals Over 25</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 30,
"similarity_fct": "cos_sim"
}
```
#### cl
* Dataset: [cl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4) at [5cd3177](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4/tree/5cd317742b975f62113438c1a494e420295e566a)
* Size: 622,542 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 3 tokens</li><li>mean: 12.74 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.18 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>0: ~6.00%</li><li>1: ~94.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------|:---------------------------------------|:---------------|
| <code>LA Dodgers TT Over 46</code> | <code>Dodgers Over 46</code> | <code>1</code> |
| <code>Bronny James U80.0 Pts</code> | <code>morikawa top 5 us masters</code> | <code>0</code> |
| <code>man utd bruno sog Over 10.5</code> | <code>Bruno Fernandes Shots</code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.8,
"size_average": true
}
```
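The two dataset views above were trained jointly, one loss per view. A minimal sketch of how such a multi-loss setup is wired in Sentence Transformers; the `load_dataset` config names are assumptions for illustration:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Assumed config names for the triplet ("mnrl") and labeled-pair ("cl") views.
mnrl_ds = load_dataset("deadf00d/outcomes-bigger-better-4", "mnrl", split="train")
cl_ds = load_dataset("deadf00d/outcomes-bigger-better-4", "cl", split="train")

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={"mnrl": mnrl_ds, "cl": cl_ds},   # keys pair each dataset with its loss
    loss={
        "mnrl": losses.MultipleNegativesRankingLoss(model, scale=30),
        "cl": losses.ContrastiveLoss(model, margin=0.8, size_average=True),
    },
)
trainer.train()
```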
### Evaluation Datasets
#### mnrl
* Dataset: [mnrl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4) at [5cd3177](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4/tree/5cd317742b975f62113438c1a494e420295e566a)
* Size: 30,990 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 12.87 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.63 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.89 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 | negative |
|:-----------------------------------------------------------------|:---------------------------------------------------------------------|:-----------------------------------------------------------------|
| <code>Spain v England Total Corners 5.5 Over 53.5 Corners</code> | <code>Spain vs England Corners</code> | <code>Spain v England Total Corners 5.5 Over 53.5 Corners</code> |
| <code>Martin Necas Shots Under 86.0</code> | <code>Argentina - France Total Goals by Argentina Under 0.0</code> | <code>Martin Necas Shots Under 86.0</code> |
| <code>Spain vs England England Total Goals Over 4.5</code> | <code>Spain vs England England: Team Total Goals Live Betting</code> | <code>Spain vs England England Total Goals Over 4.5</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 30,
"similarity_fct": "cos_sim"
}
```
#### cl
* Dataset: [cl](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4) at [5cd3177](https://huggingface.co/datasets/deadf00d/outcomes-bigger-better-4/tree/5cd317742b975f62113438c1a494e420295e566a)
* Size: 32,765 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 3 tokens</li><li>mean: 12.61 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.83 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>0: ~4.50%</li><li>1: ~95.50%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------|:----------------------------------------------------------|:---------------|
| <code>argentina v colombia, 1H total corners Over 73.5</code> | <code>arg-col 1H total corn >73.5</code> | <code>1</code> |
| <code>ben vs borna set 1</code> | <code>1st Set</code> | <code>1</code> |
| <code>france Under 40.5 corners</code> | <code>argentina vs france total corners Under 40.5</code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.8,
"size_average": true
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
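A minimal sketch of how the non-default values above map onto `SentenceTransformerTrainingArguments`; the output path is a placeholder:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outcomes-28",                   # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts per batch (important for MNRL)
)
```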
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 1
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | mnrl loss | cl loss | sts-dev_cosine_ap |
|:------:|:----:|:-------------:|:---------:|:-------:|:-----------------:|
| -1 | -1 | - | - | - | 0.9836 |
| 0.0017 | 1 | 7.9572 | - | - | - |
| 0.0034 | 2 | 7.8107 | - | - | - |
| 0.0051 | 3 | 4.8317 | - | - | - |
| 0.0068 | 4 | 4.7894 | - | - | - |
| 0.0085 | 5 | 0.144 | - | - | - |
| 0.0102 | 6 | 4.6824 | - | - | - |
| 0.0118 | 7 | 10.6527 | - | - | - |
| 0.0135 | 8 | 6.0742 | - | - | - |
| 0.0152 | 9 | 7.3626 | - | - | - |
| 0.0169 | 10 | 5.8463 | - | - | - |
| 0.0186 | 11 | 4.3528 | - | - | - |
| 0.0203 | 12 | 7.0358 | - | - | - |
| 0.0220 | 13 | 4.0982 | - | - | - |
| 0.0237 | 14 | 2.7616 | - | - | - |
| 0.0254 | 15 | 5.2543 | - | - | - |
| 0.0271 | 16 | 2.6024 | - | - | - |
| 0.0288 | 17 | 3.6649 | - | - | - |
| 0.0305 | 18 | 3.6248 | - | - | - |
| 0.0321 | 19 | 1.1929 | - | - | - |
| 0.0338 | 20 | 5.4035 | - | - | - |
| 0.0355 | 21 | 2.0505 | - | - | - |
| 0.0372 | 22 | 6.9024 | - | - | - |
| 0.0389 | 23 | 2.7907 | - | - | - |
| 0.0406 | 24 | 2.6975 | - | - | - |
| 0.0423 | 25 | 3.3826 | - | - | - |
| 0.0440 | 26 | 3.2941 | - | - | - |
| 0.0457 | 27 | 2.3466 | - | - | - |
| 0.0474 | 28 | 3.0248 | - | - | - |
| 0.0491 | 29 | 2.9644 | - | - | - |
| 0.0508 | 30 | 5.7124 | - | - | - |
| 0.0525 | 31 | 1.4423 | - | - | - |
| 0.0541 | 32 | 3.4849 | - | - | - |
| 0.0558 | 33 | 2.0712 | - | - | - |
| 0.0575 | 34 | 2.6524 | - | - | - |
| 0.0592 | 35 | 3.2448 | - | - | - |
| 0.0609 | 36 | 3.2318 | - | - | - |
| 0.0626 | 37 | 2.4999 | - | - | - |
| 0.0643 | 38 | 3.7345 | - | - | - |
| 0.0660 | 39 | 0.6177 | - | - | - |
| 0.0677 | 40 | 1.8802 | - | - | - |
| 0.0694 | 41 | 1.7899 | - | - | - |
| 0.0711 | 42 | 3.6232 | - | - | - |
| 0.0728 | 43 | 2.4276 | - | - | - |
| 0.0745 | 44 | 3.6352 | - | - | - |
| 0.0761 | 45 | 1.1854 | - | - | - |
| 0.0778 | 46 | 1.8063 | - | - | - |
| 0.0795 | 47 | 2.3531 | - | - | - |
| 0.0812 | 48 | 0.6 | - | - | - |
| 0.0829 | 49 | 2.9032 | - | - | - |
| 0.0846 | 50 | 2.8601 | - | - | - |
| 0.0863 | 51 | 1.1243 | - | - | - |
| 0.0880 | 52 | 0.5862 | - | - | - |
| 0.0897 | 53 | 2.3132 | - | - | - |
| 0.0914 | 54 | 1.1522 | - | - | - |
| 0.0931 | 55 | 1.7249 | - | - | - |
| 0.0948 | 56 | 2.2204 | - | - | - |
| 0.0964 | 57 | 3.3977 | - | - | - |
| 0.0981 | 58 | 1.6919 | - | - | - |
| 0.0998 | 59 | 2.7345 | - | - | - |
| 0.1015 | 60 | 1.6712 | - | - | - |
| 0.1032 | 61 | 1.6218 | - | - | - |
| 0.1049 | 62 | 2.1822 | - | - | - |
| 0.1066 | 63 | 1.693 | - | - | - |
| 0.1083 | 64 | 3.8521 | - | - | - |
| 0.1100 | 65 | 2.7327 | - | - | - |
| 0.1117 | 66 | 1.6312 | - | - | - |
| 0.1134 | 67 | 3.3129 | - | - | - |
| 0.1151 | 68 | 1.6365 | - | - | - |
| 0.1168 | 69 | 2.1566 | - | - | - |
| 0.1184 | 70 | 2.1801 | - | - | - |
| 0.1201 | 71 | 1.5709 | - | - | - |
| 0.1218 | 72 | 0.5748 | - | - | - |
| 0.1235 | 73 | 2.6846 | - | - | - |
| 0.1252 | 74 | 1.0933 | - | - | - |
| 0.1269 | 75 | 1.6888 | - | - | - |
| 0.1286 | 76 | 2.1163 | - | - | - |
| 0.1303 | 77 | 2.6946 | - | - | - |
| 0.1320 | 78 | 2.6474 | - | - | - |
| 0.1337 | 79 | 3.134 | - | - | - |
| 0.1354 | 80 | 2.635 | - | - | - |
| 0.1371 | 81 | 1.5779 | - | - | - |
| 0.1387 | 82 | 1.0133 | - | - | - |
| 0.1404 | 83 | 2.0551 | - | - | - |
| 0.1421 | 84 | 3.1279 | - | - | - |
| 0.1438 | 85 | 3.1342 | - | - | - |
| 0.1455 | 86 | 3.1465 | - | - | - |
| 0.1472 | 87 | 1.6237 | - | - | - |
| 0.1489 | 88 | 2.5706 | - | - | - |
| 0.1506 | 89 | 3.1378 | - | - | - |
| 0.1523 | 90 | 1.5609 | - | - | - |
| 0.1540 | 91 | 1.5212 | - | - | - |
| 0.1557 | 92 | 1.5674 | - | - | - |
| 0.1574 | 93 | 2.0325 | - | - | - |
| 0.1591 | 94 | 1.5438 | - | - | - |
| 0.1607 | 95 | 2.0595 | - | - | - |
| 0.1624 | 96 | 3.1635 | - | - | - |
| 0.1641 | 97 | 0.5143 | - | - | - |
| 0.1658 | 98 | 2.1033 | - | - | - |
| 0.1675 | 99 | 2.0828 | - | - | - |
| 0.1692 | 100 | 1.5406 | 4.1951 | 0.0183 | 0.9903 |
| 0.1709 | 101 | 2.0191 | - | - | - |
| 0.1726 | 102 | 2.0966 | - | - | - |
| 0.1743 | 103 | 2.1155 | - | - | - |
| 0.1760 | 104 | 1.4408 | - | - | - |
| 0.1777 | 105 | 2.0579 | - | - | - |
| 0.1794 | 106 | 2.5217 | - | - | - |
| 0.1810 | 107 | 2.0904 | - | - | - |
| 0.1827 | 108 | 1.5469 | - | - | - |
| 0.1844 | 109 | 0.978 | - | - | - |
| 0.1861 | 110 | 3.5853 | - | - | - |
| 0.1878 | 111 | 3.5888 | - | - | - |
| 0.1895 | 112 | 3.4996 | - | - | - |
| 0.1912 | 113 | 1.0079 | - | - | - |
| 0.1929 | 114 | 2.0986 | - | - | - |
| 0.1946 | 115 | 2.0279 | - | - | - |
| 0.1963 | 116 | 2.0191 | - | - | - |
| 0.1980 | 117 | 1.5598 | - | - | - |
| 0.1997 | 118 | 2.5282 | - | - | - |
| 0.2014 | 119 | 2.943 | - | - | - |
| 0.2030 | 120 | 2.0478 | - | - | - |
| 0.2047 | 121 | 2.0494 | - | - | - |
| 0.2064 | 122 | 1.9849 | - | - | - |
| 0.2081 | 123 | 1.4838 | - | - | - |
| 0.2098 | 124 | 2.0002 | - | - | - |
| 0.2115 | 125 | 3.1381 | - | - | - |
| 0.2132 | 126 | 1.0577 | - | - | - |
| 0.2149 | 127 | 1.4749 | - | - | - |
| 0.2166 | 128 | 2.0323 | - | - | - |
| 0.2183 | 129 | 0.5312 | - | - | - |
| 0.2200 | 130 | 2.5327 | - | - | - |
| 0.2217 | 131 | 3.0559 | - | - | - |
| 0.2234 | 132 | 3.5027 | - | - | - |
| 0.2250 | 133 | 2.0475 | - | - | - |
| 0.2267 | 134 | 2.5271 | - | - | - |
| 0.2284 | 135 | 1.0191 | - | - | - |
| 0.2301 | 136 | 1.9728 | - | - | - |
| 0.2318 | 137 | 1.9809 | - | - | - |
| 0.2335 | 138 | 1.9983 | - | - | - |
| 0.2352 | 139 | 2.0362 | - | - | - |
| 0.2369 | 140 | 1.99 | - | - | - |
| 0.2386 | 141 | 2.0141 | - | - | - |
| 0.2403 | 142 | 2.5001 | - | - | - |
| 0.2420 | 143 | 1.0285 | - | - | - |
| 0.2437 | 144 | 0.5248 | - | - | - |
| 0.2453 | 145 | 2.5039 | - | - | - |
| 0.2470 | 146 | 2.5572 | - | - | - |
| 0.2487 | 147 | 2.3744 | - | - | - |
| 0.2504 | 148 | 2.006 | - | - | - |
| 0.2521 | 149 | 2.026 | - | - | - |
| 0.2538 | 150 | 1.9521 | - | - | - |
| 0.2555 | 151 | 1.9564 | - | - | - |
| 0.2572 | 152 | 1.4853 | - | - | - |
| 0.2589 | 153 | 3.509 | - | - | - |
| 0.2606 | 154 | 2.0253 | - | - | - |
| 0.2623 | 155 | 1.8938 | - | - | - |
| 0.2640 | 156 | 0.9935 | - | - | - |
| 0.2657 | 157 | 2.9222 | - | - | - |
| 0.2673 | 158 | 2.5249 | - | - | - |
| 0.2690 | 159 | 2.4409 | - | - | - |
| 0.2707 | 160 | 1.534 | - | - | - |
| 0.2724 | 161 | 1.4767 | - | - | - |
| 0.2741 | 162 | 1.9945 | - | - | - |
| 0.2758 | 163 | 0.9907 | - | - | - |
| 0.2775 | 164 | 1.9723 | - | - | - |
| 0.2792 | 165 | 1.9887 | - | - | - |
| 0.2809 | 166 | 1.4385 | - | - | - |
| 0.2826 | 167 | 1.0322 | - | - | - |
| 0.2843 | 168 | 2.4731 | - | - | - |
| 0.2860 | 169 | 0.5191 | - | - | - |
| 0.2876 | 170 | 1.4503 | - | - | - |
| 0.2893 | 171 | 1.4294 | - | - | - |
| 0.2910 | 172 | 2.5022 | - | - | - |
| 0.2927 | 173 | 1.9542 | - | - | - |
| 0.2944 | 174 | 2.402 | - | - | - |
| 0.2961 | 175 | 1.4354 | - | - | - |
| 0.2978 | 176 | 2.9806 | - | - | - |
| 0.2995 | 177 | 1.889 | - | - | - |
| 0.3012 | 178 | 1.4698 | - | - | - |
| 0.3029 | 179 | 2.0029 | - | - | - |
| 0.3046 | 180 | 1.9535 | - | - | - |
| 0.3063 | 181 | 1.4509 | - | - | - |
| 0.3080 | 182 | 1.509 | - | - | - |
| 0.3096 | 183 | 1.8442 | - | - | - |
| 0.3113 | 184 | 2.4215 | - | - | - |
| 0.3130 | 185 | 1.5228 | - | - | - |
| 0.3147 | 186 | 2.3201 | - | - | - |
| 0.3164 | 187 | 1.4888 | - | - | - |
| 0.3181 | 188 | 2.4358 | - | - | - |
| 0.3198 | 189 | 1.4427 | - | - | - |
| 0.3215 | 190 | 1.9332 | - | - | - |
| 0.3232 | 191 | 1.9678 | - | - | - |
| 0.3249 | 192 | 1.4981 | - | - | - |
| 0.3266 | 193 | 2.4781 | - | - | - |
| 0.3283 | 194 | 1.9477 | - | - | - |
| 0.3299 | 195 | 1.9923 | - | - | - |
| 0.3316 | 196 | 1.4397 | - | - | - |
| 0.3333 | 197 | 1.4319 | - | - | - |
| 0.3350 | 198 | 1.5052 | - | - | - |
| 0.3367 | 199 | 2.4477 | - | - | - |
| 0.3384 | 200 | 1.9784 | 4.0037 | 0.0181 | 0.9906 |
| 0.3401 | 201 | 2.4017 | - | - | - |
| 0.3418 | 202 | 1.4708 | - | - | - |
| 0.3435 | 203 | 3.3468 | - | - | - |
| 0.3452 | 204 | 1.9806 | - | - | - |
| 0.3469 | 205 | 1.5223 | - | - | - |
| 0.3486 | 206 | 1.8871 | - | - | - |
| 0.3503 | 207 | 1.8976 | - | - | - |
| 0.3519 | 208 | 1.4267 | - | - | - |
| 0.3536 | 209 | 1.9614 | - | - | - |
| 0.3553 | 210 | 1.4282 | - | - | - |
| 0.3570 | 211 | 2.4909 | - | - | - |
| 0.3587 | 212 | 1.9373 | - | - | - |
| 0.3604 | 213 | 2.9203 | - | - | - |
| 0.3621 | 214 | 2.834 | - | - | - |
| 0.3638 | 215 | 1.9964 | - | - | - |
| 0.3655 | 216 | 1.0024 | - | - | - |
| 0.3672 | 217 | 1.9366 | - | - | - |
| 0.3689 | 218 | 1.8914 | - | - | - |
| 0.3706 | 219 | 1.4406 | - | - | - |
| 0.3723 | 220 | 1.9578 | - | - | - |
| 0.3739 | 221 | 2.9061 | - | - | - |
| 0.3756 | 222 | 2.9011 | - | - | - |
| 0.3773 | 223 | 1.4855 | - | - | - |
| 0.3790 | 224 | 3.4348 | - | - | - |
| 0.3807 | 225 | 0.9685 | - | - | - |
| 0.3824 | 226 | 0.5205 | - | - | - |
| 0.3841 | 227 | 2.9349 | - | - | - |
| 0.3858 | 228 | 1.998 | - | - | - |
| 0.3875 | 229 | 2.4078 | - | - | - |
| 0.3892 | 230 | 2.8629 | - | - | - |
| 0.3909 | 231 | 1.4784 | - | - | - |
| 0.3926 | 232 | 1.3759 | - | - | - |
| 0.3942 | 233 | 1.9377 | - | - | - |
| 0.3959 | 234 | 0.9528 | - | - | - |
| 0.3976 | 235 | 0.9255 | - | - | - |
| 0.3993 | 236 | 1.4336 | - | - | - |
| 0.4010 | 237 | 2.3663 | - | - | - |
| 0.4027 | 238 | 3.3069 | - | - | - |
| 0.4044 | 239 | 2.5007 | - | - | - |
| 0.4061 | 240 | 2.3792 | - | - | - |
| 0.4078 | 241 | 2.3379 | - | - | - |
| 0.4095 | 242 | 1.9566 | - | - | - |
| 0.4112 | 243 | 1.8586 | - | - | - |
| 0.4129 | 244 | 1.4061 | - | - | - |
| 0.4146 | 245 | 1.9035 | - | - | - |
| 0.4162 | 246 | 2.905 | - | - | - |
| 0.4179 | 247 | 2.3222 | - | - | - |
| 0.4196 | 248 | 1.915 | - | - | - |
| 0.4213 | 249 | 1.9012 | - | - | - |
| 0.4230 | 250 | 1.4828 | - | - | - |
| 0.4247 | 251 | 3.0056 | - | - | - |
| 0.4264 | 252 | 1.9539 | - | - | - |
| 0.4281 | 253 | 1.5057 | - | - | - |
| 0.4298 | 254 | 2.3616 | - | - | - |
| 0.4315 | 255 | 2.8906 | - | - | - |
| 0.4332 | 256 | 1.4395 | - | - | - |
| 0.4349 | 257 | 1.761 | - | - | - |
| 0.4365 | 258 | 1.4066 | - | - | - |
| 0.4382 | 259 | 2.4197 | - | - | - |
| 0.4399 | 260 | 1.9527 | - | - | - |
| 0.4416 | 261 | 1.3631 | - | - | - |
| 0.4433 | 262 | 0.9367 | - | - | - |
| 0.4450 | 263 | 1.9869 | - | - | - |
| 0.4467 | 264 | 2.4049 | - | - | - |
| 0.4484 | 265 | 1.4219 | - | - | - |
| 0.4501 | 266 | 1.8809 | - | - | - |
| 0.4518 | 267 | 2.9217 | - | - | - |
| 0.4535 | 268 | 2.3216 | - | - | - |
| 0.4552 | 269 | 1.3635 | - | - | - |
| 0.4569 | 270 | 2.405 | - | - | - |
| 0.4585 | 271 | 2.4805 | - | - | - |
| 0.4602 | 272 | 0.4898 | - | - | - |
| 0.4619 | 273 | 1.481 | - | - | - |
| 0.4636 | 274 | 1.9463 | - | - | - |
| 0.4653 | 275 | 1.3944 | - | - | - |
| 0.4670 | 276 | 1.9532 | - | - | - |
| 0.4687 | 277 | 2.433 | - | - | - |
| 0.4704 | 278 | 0.9642 | - | - | - |
| 0.4721 | 279 | 2.3929 | - | - | - |
| 0.4738 | 280 | 2.3569 | - | - | - |
| 0.4755 | 281 | 1.0127 | - | - | - |
| 0.4772 | 282 | 1.9784 | - | - | - |
| 0.4788 | 283 | 1.988 | - | - | - |
| 0.4805 | 284 | 1.8243 | - | - | - |
| 0.4822 | 285 | 2.2862 | - | - | - |
| 0.4839 | 286 | 1.936 | - | - | - |
| 0.4856 | 287 | 1.8669 | - | - | - |
| 0.4873 | 288 | 1.4009 | - | - | - |
| 0.4890 | 289 | 0.517 | - | - | - |
| 0.4907 | 290 | 0.9652 | - | - | - |
| 0.4924 | 291 | 2.3806 | - | - | - |
| 0.4941 | 292 | 1.5087 | - | - | - |
| 0.4958 | 293 | 2.7513 | - | - | - |
| 0.4975 | 294 | 1.9004 | - | - | - |
| 0.4992 | 295 | 2.3847 | - | - | - |
| 0.5008 | 296 | 1.9346 | - | - | - |
| 0.5025 | 297 | 0.9983 | - | - | - |
| 0.5042 | 298 | 1.9259 | - | - | - |
| 0.5059 | 299 | 1.4515 | - | - | - |
| 0.5076 | 300 | 1.464 | 3.9250 | 0.0179 | 0.9907 |
| 0.5093 | 301 | 1.9055 | - | - | - |
| 0.5110 | 302 | 2.4128 | - | - | - |
| 0.5127 | 303 | 1.8788 | - | - | - |
| 0.5144 | 304 | 2.4005 | - | - | - |
| 0.5161 | 305 | 2.8882 | - | - | - |
| 0.5178 | 306 | 1.9386 | - | - | - |
| 0.5195 | 307 | 1.987 | - | - | - |
| 0.5212 | 308 | 1.4729 | - | - | - |
| 0.5228 | 309 | 1.9293 | - | - | - |
| 0.5245 | 310 | 0.9594 | - | - | - |
| 0.5262 | 311 | 2.3498 | - | - | - |
| 0.5279 | 312 | 2.3576 | - | - | - |
| 0.5296 | 313 | 1.4234 | - | - | - |
| 0.5313 | 314 | 0.9163 | - | - | - |
| 0.5330 | 315 | 1.9619 | - | - | - |
| 0.5347 | 316 | 2.4892 | - | - | - |
| 0.5364 | 317 | 1.4732 | - | - | - |
| 0.5381 | 318 | 2.3542 | - | - | - |
| 0.5398 | 319 | 2.9066 | - | - | - |
| 0.5415 | 320 | 1.4621 | - | - | - |
| 0.5431 | 321 | 3.3144 | - | - | - |
| 0.5448 | 322 | 2.7772 | - | - | - |
| 0.5465 | 323 | 2.8627 | - | - | - |
| 0.5482 | 324 | 1.8365 | - | - | - |
| 0.5499 | 325 | 2.4592 | - | - | - |
| 0.5516 | 326 | 2.3525 | - | - | - |
| 0.5533 | 327 | 1.4206 | - | - | - |
| 0.5550 | 328 | 2.3435 | - | - | - |
| 0.5567 | 329 | 1.9043 | - | - | - |
| 0.5584 | 330 | 1.4576 | - | - | - |
| 0.5601 | 331 | 0.9223 | - | - | - |
| 0.5618 | 332 | 2.3032 | - | - | - |
| 0.5635 | 333 | 1.9359 | - | - | - |
| 0.5651 | 334 | 1.477 | - | - | - |
| 0.5668 | 335 | 1.9312 | - | - | - |
| 0.5685 | 336 | 3.385 | - | - | - |
| 0.5702 | 337 | 2.3731 | - | - | - |
| 0.5719 | 338 | 1.8676 | - | - | - |
| 0.5736 | 339 | 1.4941 | - | - | - |
| 0.5753 | 340 | 2.956 | - | - | - |
| 0.5770 | 341 | 1.8991 | - | - | - |
| 0.5787 | 342 | 2.792 | - | - | - |
| 0.5804 | 343 | 0.9784 | - | - | - |
| 0.5821 | 344 | 1.8529 | - | - | - |
| 0.5838 | 345 | 2.8493 | - | - | - |
| 0.5854 | 346 | 0.9955 | - | - | - |
| 0.5871 | 347 | 2.8339 | - | - | - |
| 0.5888 | 348 | 1.8424 | - | - | - |
| 0.5905 | 349 | 1.4302 | - | - | - |
| 0.5922 | 350 | 1.3806 | - | - | - |
| 0.5939 | 351 | 1.8792 | - | - | - |
| 0.5956 | 352 | 2.44 | - | - | - |
| 0.5973 | 353 | 1.4441 | - | - | - |
| 0.5990 | 354 | 0.9948 | - | - | - |
| 0.6007 | 355 | 1.9267 | - | - | - |
| 0.6024 | 356 | 1.8865 | - | - | - |
| 0.6041 | 357 | 2.3951 | - | - | - |
| 0.6058 | 358 | 1.3543 | - | - | - |
| 0.6074 | 359 | 2.3722 | - | - | - |
| 0.6091 | 360 | 2.8528 | - | - | - |
| 0.6108 | 361 | 1.9579 | - | - | - |
| 0.6125 | 362 | 1.4187 | - | - | - |
| 0.6142 | 363 | 2.8474 | - | - | - |
| 0.6159 | 364 | 1.4787 | - | - | - |
| 0.6176 | 365 | 0.9802 | - | - | - |
| 0.6193 | 366 | 1.8556 | - | - | - |
| 0.6210 | 367 | 2.4519 | - | - | - |
| 0.6227 | 368 | 1.9073 | - | - | - |
| 0.6244 | 369 | 1.9379 | - | - | - |
| 0.6261 | 370 | 1.4542 | - | - | - |
| 0.6277 | 371 | 2.8597 | - | - | - |
| 0.6294 | 372 | 1.485 | - | - | - |
| 0.6311 | 373 | 1.9556 | - | - | - |
| 0.6328 | 374 | 1.4288 | - | - | - |
| 0.6345 | 375 | 1.9431 | - | - | - |
| 0.6362 | 376 | 1.9064 | - | - | - |
| 0.6379 | 377 | 2.327 | - | - | - |
| 0.6396 | 378 | 2.9043 | - | - | - |
| 0.6413 | 379 | 1.9608 | - | - | - |
| 0.6430 | 380 | 1.4286 | - | - | - |
| 0.6447 | 381 | 1.9443 | - | - | - |
| 0.6464 | 382 | 2.8463 | - | - | - |
| 0.6481 | 383 | 1.4638 | - | - | - |
| 0.6497 | 384 | 2.8708 | - | - | - |
| 0.6514 | 385 | 0.5008 | - | - | - |
| 0.6531 | 386 | 1.8324 | - | - | - |
| 0.6548 | 387 | 2.3995 | - | - | - |
| 0.6565 | 388 | 0.9239 | - | - | - |
| 0.6582 | 389 | 0.953 | - | - | - |
| 0.6599 | 390 | 0.9581 | - | - | - |
| 0.6616 | 391 | 1.9197 | - | - | - |
| 0.6633 | 392 | 0.9582 | - | - | - |
| 0.6650 | 393 | 1.4458 | - | - | - |
| 0.6667 | 394 | 1.4524 | - | - | - |
| 0.6684 | 395 | 1.4322 | - | - | - |
| 0.6701 | 396 | 1.8367 | - | - | - |
| 0.6717 | 397 | 0.9496 | - | - | - |
| 0.6734 | 398 | 1.4267 | - | - | - |
| 0.6751 | 399 | 2.9175 | - | - | - |
| 0.6768 | 400 | 1.4309 | 3.9022 | 0.0179 | 0.9908 |
| 0.6785 | 401 | 1.9559 | - | - | - |
| 0.6802 | 402 | 1.3724 | - | - | - |
| 0.6819 | 403 | 1.9655 | - | - | - |
| 0.6836 | 404 | 2.2448 | - | - | - |
| 0.6853 | 405 | 1.8538 | - | - | - |
| 0.6870 | 406 | 2.3575 | - | - | - |
| 0.6887 | 407 | 1.4074 | - | - | - |
| 0.6904 | 408 | 1.3687 | - | - | - |
| 0.6920 | 409 | 2.3241 | - | - | - |
| 0.6937 | 410 | 1.9014 | - | - | - |
| 0.6954 | 411 | 1.3877 | - | - | - |
| 0.6971 | 412 | 0.0192 | - | - | - |
| 0.6988 | 413 | 2.4055 | - | - | - |
| 0.7005 | 414 | 1.4111 | - | - | - |
| 0.7022 | 415 | 1.4222 | - | - | - |
| 0.7039 | 416 | 0.9365 | - | - | - |
| 0.7056 | 417 | 2.8339 | - | - | - |
| 0.7073 | 418 | 0.4793 | - | - | - |
| 0.7090 | 419 | 2.3388 | - | - | - |
| 0.7107 | 420 | 0.9691 | - | - | - |
| 0.7124 | 421 | 2.3049 | - | - | - |
| 0.7140 | 422 | 2.0049 | - | - | - |
| 0.7157 | 423 | 0.4498 | - | - | - |
| 0.7174 | 424 | 2.9309 | - | - | - |
| 0.7191 | 425 | 1.8395 | - | - | - |
| 0.7208 | 426 | 2.28 | - | - | - |
| 0.7225 | 427 | 2.3715 | - | - | - |
| 0.7242 | 428 | 1.8526 | - | - | - |
| 0.7259 | 429 | 2.3259 | - | - | - |
| 0.7276 | 430 | 1.4394 | - | - | - |
| 0.7293 | 431 | 1.3959 | - | - | - |
| 0.7310 | 432 | 1.9069 | - | - | - |
| 0.7327 | 433 | 1.845 | - | - | - |
| 0.7343 | 434 | 2.421 | - | - | - |
| 0.7360 | 435 | 1.4798 | - | - | - |
| 0.7377 | 436 | 1.3814 | - | - | - |
| 0.7394 | 437 | 2.299 | - | - | - |
| 0.7411 | 438 | 0.9778 | - | - | - |
| 0.7428 | 439 | 1.8638 | - | - | - |
| 0.7445 | 440 | 1.9017 | - | - | - |
| 0.7462 | 441 | 0.4421 | - | - | - |
| 0.7479 | 442 | 0.9385 | - | - | - |
| 0.7496 | 443 | 2.3258 | - | - | - |
| 0.7513 | 444 | 0.9822 | - | - | - |
| 0.7530 | 445 | 1.8375 | - | - | - |
| 0.7547 | 446 | 1.4373 | - | - | - |
| 0.7563 | 447 | 1.412 | - | - | - |
| 0.7580 | 448 | 1.4522 | - | - | - |
| 0.7597 | 449 | 0.0189 | - | - | - |
| 0.7614 | 450 | 1.8662 | - | - | - |
| 0.7631 | 451 | 0.9544 | - | - | - |
| 0.7648 | 452 | 2.3228 | - | - | - |
| 0.7665 | 453 | 1.3923 | - | - | - |
| 0.7682 | 454 | 0.922 | - | - | - |
| 0.7699 | 455 | 1.4387 | - | - | - |
| 0.7716 | 456 | 2.7728 | - | - | - |
| 0.7733 | 457 | 1.9294 | - | - | - |
| 0.7750 | 458 | 2.8621 | - | - | - |
| 0.7766 | 459 | 1.42 | - | - | - |
| 0.7783 | 460 | 2.3661 | - | - | - |
| 0.7800 | 461 | 1.9049 | - | - | - |
| 0.7817 | 462 | 0.0175 | - | - | - |
| 0.7834 | 463 | 2.4074 | - | - | - |
| 0.7851 | 464 | 1.8648 | - | - | - |
| 0.7868 | 465 | 1.8629 | - | - | - |
| 0.7885 | 466 | 2.7884 | - | - | - |
| 0.7902 | 467 | 1.3493 | - | - | - |
| 0.7919 | 468 | 2.403 | - | - | - |
| 0.7936 | 469 | 2.7929 | - | - | - |
| 0.7953 | 470 | 0.9255 | - | - | - |
| 0.7970 | 471 | 2.7728 | - | - | - |
| 0.7986 | 472 | 0.9824 | - | - | - |
| 0.8003 | 473 | 1.876 | - | - | - |
| 0.8020 | 474 | 1.3914 | - | - | - |
| 0.8037 | 475 | 0.4912 | - | - | - |
| 0.8054 | 476 | 2.3282 | - | - | - |
| 0.8071 | 477 | 0.908 | - | - | - |
| 0.8088 | 478 | 2.423 | - | - | - |
| 0.8105 | 479 | 1.42 | - | - | - |
| 0.8122 | 480 | 1.9588 | - | - | - |
| 0.8139 | 481 | 0.9519 | - | - | - |
| 0.8156 | 482 | 1.4129 | - | - | - |
| 0.8173 | 483 | 2.4084 | - | - | - |
| 0.8190 | 484 | 1.897 | - | - | - |
| 0.8206 | 485 | 1.4394 | - | - | - |
| 0.8223 | 486 | 2.8871 | - | - | - |
| 0.8240 | 487 | 0.9683 | - | - | - |
| 0.8257 | 488 | 0.9442 | - | - | - |
| 0.8274 | 489 | 1.9011 | - | - | - |
| 0.8291 | 490 | 1.8735 | - | - | - |
| 0.8308 | 491 | 2.3152 | - | - | - |
| 0.8325 | 492 | 2.3091 | - | - | - |
| 0.8342 | 493 | 2.3408 | - | - | - |
| 0.8359 | 494 | 2.3726 | - | - | - |
| 0.8376 | 495 | 1.4211 | - | - | - |
| 0.8393 | 496 | 0.9557 | - | - | - |
| 0.8409 | 497 | 2.3373 | - | - | - |
| 0.8426 | 498 | 0.9889 | - | - | - |
| 0.8443 | 499 | 1.459 | - | - | - |
| 0.8460 | 500 | 1.398 | 3.9300 | 0.0183 | 0.9908 |
| 0.8477 | 501 | 1.4072 | - | - | - |
| 0.8494 | 502 | 1.8293 | - | - | - |
| 0.8511 | 503 | 0.9472 | - | - | - |
| 0.8528 | 504 | 0.9628 | - | - | - |
| 0.8545 | 505 | 3.7849 | - | - | - |
| 0.8562 | 506 | 1.3792 | - | - | - |
| 0.8579 | 507 | 1.8149 | - | - | - |
| 0.8596 | 508 | 2.3405 | - | - | - |
| 0.8613 | 509 | 1.8663 | - | - | - |
| 0.8629 | 510 | 2.3751 | - | - | - |
| 0.8646 | 511 | 2.6911 | - | - | - |
| 0.8663 | 512 | 2.368 | - | - | - |
| 0.8680 | 513 | 2.3323 | - | - | - |
| 0.8697 | 514 | 1.4309 | - | - | - |
| 0.8714 | 515 | 0.9448 | - | - | - |
| 0.8731 | 516 | 1.4439 | - | - | - |
| 0.8748 | 517 | 2.3648 | - | - | - |
| 0.8765 | 518 | 2.3774 | - | - | - |
| 0.8782 | 519 | 1.448 | - | - | - |
| 0.8799 | 520 | 1.4461 | - | - | - |
| 0.8816 | 521 | 2.3183 | - | - | - |
| 0.8832 | 522 | 0.9882 | - | - | - |
| 0.8849 | 523 | 0.957 | - | - | - |
| 0.8866 | 524 | 1.9394 | - | - | - |
| 0.8883 | 525 | 2.3117 | - | - | - |
| 0.8900 | 526 | 1.8949 | - | - | - |
| 0.8917 | 527 | 1.4605 | - | - | - |
| 0.8934 | 528 | 1.8327 | - | - | - |
| 0.8951 | 529 | 1.8671 | - | - | - |
| 0.8968 | 530 | 2.3391 | - | - | - |
| 0.8985 | 531 | 1.4219 | - | - | - |
| 0.9002 | 532 | 1.8474 | - | - | - |
| 0.9019 | 533 | 1.9673 | - | - | - |
| 0.9036 | 534 | 3.2427 | - | - | - |
| 0.9052 | 535 | 1.3797 | - | - | - |
| 0.9069 | 536 | 2.3618 | - | - | - |
| 0.9086 | 537 | 2.7532 | - | - | - |
| 0.9103 | 538 | 1.4237 | - | - | - |
| 0.9120 | 539 | 1.9271 | - | - | - |
| 0.9137 | 540 | 2.323 | - | - | - |
| 0.9154 | 541 | 1.9416 | - | - | - |
| 0.9171 | 542 | 1.8712 | - | - | - |
| 0.9188 | 543 | 2.8092 | - | - | - |
| 0.9205 | 544 | 2.2987 | - | - | - |
| 0.9222 | 545 | 1.8482 | - | - | - |
| 0.9239 | 546 | 1.4584 | - | - | - |
| 0.9255 | 547 | 2.3785 | - | - | - |
| 0.9272 | 548 | 2.3385 | - | - | - |
| 0.9289 | 549 | 1.8966 | - | - | - |
| 0.9306 | 550 | 1.8058 | - | - | - |
| 0.9323 | 551 | 1.4287 | - | - | - |
| 0.9340 | 552 | 1.9003 | - | - | - |
| 0.9357 | 553 | 0.9654 | - | - | - |
| 0.9374 | 554 | 1.3663 | - | - | - |
| 0.9391 | 555 | 1.346 | - | - | - |
| 0.9408 | 556 | 1.8707 | - | - | - |
| 0.9425 | 557 | 1.4324 | - | - | - |
| 0.9442 | 558 | 1.8819 | - | - | - |
| 0.9459 | 559 | 2.4909 | - | - | - |
| 0.9475 | 560 | 0.9675 | - | - | - |
| 0.9492 | 561 | 0.922 | - | - | - |
| 0.9509 | 562 | 2.847 | - | - | - |
| 0.9526 | 563 | 1.958 | - | - | - |
| 0.9543 | 564 | 1.3759 | - | - | - |
| 0.9560 | 565 | 1.4143 | - | - | - |
| 0.9577 | 566 | 1.3712 | - | - | - |
| 0.9594 | 567 | 1.8556 | - | - | - |
| 0.9611 | 568 | 0.9588 | - | - | - |
| 0.9628 | 569 | 2.3149 | - | - | - |
| 0.9645 | 570 | 1.8656 | - | - | - |
| 0.9662 | 571 | 2.3964 | - | - | - |
| 0.9679 | 572 | 0.9769 | - | - | - |
| 0.9695 | 573 | 1.9084 | - | - | - |
| 0.9712 | 574 | 3.3196 | - | - | - |
| 0.9729 | 575 | 2.3983 | - | - | - |
| 0.9746 | 576 | 1.9741 | - | - | - |
| 0.9763 | 577 | 1.3721 | - | - | - |
| 0.9780 | 578 | 2.3973 | - | - | - |
| 0.9797 | 579 | 1.9 | - | - | - |
| 0.9814 | 580 | 2.4559 | - | - | - |
| 0.9831 | 581 | 0.47 | - | - | - |
| 0.9848 | 582 | 1.8629 | - | - | - |
| 0.9865 | 583 | 0.8912 | - | - | - |
| 0.9882 | 584 | 0.9528 | - | - | - |
| 0.9898 | 585 | 1.4244 | - | - | - |
| 0.9915 | 586 | 1.4412 | - | - | - |
| 0.9932 | 587 | 1.5217 | - | - | - |
| 0.9949 | 588 | 2.3198 | - | - | - |
| 0.9966 | 589 | 1.8179 | - | - | - |
| 0.9983 | 590 | 1.9013 | - | - | - |
| 1.0 | 591 | 2.4358 | - | - | - |
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
3mily1u/fim-codegen-350m-mono-dpoed-control-25-1
|
3mily1u
| 2025-04-25T23:12:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"codegen",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T23:11:16Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrRobotoAI/C10
|
MrRobotoAI
| 2025-04-25T23:11:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/110",
"base_model:merge:MrRobotoAI/110",
"base_model:MrRobotoAI/C9",
"base_model:merge:MrRobotoAI/C9",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T22:38:56Z |
---
base_model:
- MrRobotoAI/C9
- MrRobotoAI/110
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/110](https://huggingface.co/MrRobotoAI/110) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/C9](https://huggingface.co/MrRobotoAI/C9)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/110
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/C9
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/110
tokenizer_source: base
dtype: bfloat16
```
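For reference, task arithmetic adds weighted task vectors (fine-tuned weights minus base weights) back onto the base. A minimal per-tensor sketch that ignores the per-filter layer schedules in the YAML above:
```python
import torch

def task_arithmetic(base: torch.Tensor,
                    tuned: list[torch.Tensor],
                    weights: list[float]) -> torch.Tensor:
    """Merge one weight tensor: base + sum_i w_i * (tuned_i - base)."""
    merged = base.clone()
    for t, w in zip(tuned, weights):
        merged += w * (t - base)   # (tuned - base) is the task vector
    return merged

# Toy example:
base = torch.zeros(4)
merged = task_arithmetic(base, [torch.ones(4), 2 * torch.ones(4)], [1.0, 0.5])
# merged == tensor([2., 2., 2., 2.])
```
In the YAML above, each bracketed list schedules the weight `w` across layer depth separately for every projection type, rather than using a single scalar per model.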
|
MrRobotoAI/C9
|
MrRobotoAI
| 2025-04-25T23:10:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MrRobotoAI/C6",
"base_model:merge:MrRobotoAI/C6",
"base_model:MrRobotoAI/C8",
"base_model:merge:MrRobotoAI/C8",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T21:40:53Z |
---
base_model:
- MrRobotoAI/C8
- MrRobotoAI/C6
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/C8](https://huggingface.co/MrRobotoAI/C8)
* [MrRobotoAI/C6](https://huggingface.co/MrRobotoAI/C6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Replaces MrRobotoAI/131
models:
- model: MrRobotoAI/C8 # Replaces 127
- model: MrRobotoAI/C6 # Replaces concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v1_ft_npo_gdr_lora_HICS_v1
merge_method: slerp
base_model: MrRobotoAI/C8
dtype: bfloat16
parameters:
t: [0, 0.25, 0.5, 0.25, 0]
```
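For reference, SLERP interpolates along the arc between two weight tensors rather than linearly; the `t` list above varies the interpolation factor with layer depth. A minimal per-tensor sketch:
```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    cos_omega = torch.dot(a_flat / (a_flat.norm() + eps),
                          b_flat / (b_flat.norm() + eps))
    omega = torch.arccos(torch.clamp(cos_omega, -1.0, 1.0))
    if omega < 1e-4:                 # nearly parallel: plain lerp is numerically stable
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    merged = (torch.sin((1 - t) * omega) * a_flat
              + torch.sin(t * omega) * b_flat) / sin_omega
    return merged.reshape(a.shape)

# t = 0 returns the first model's tensor; t = 0.5 gives the midpoint of the arc.
```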
|
MrRobotoAI/C7
|
MrRobotoAI
| 2025-04-25T23:10:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MrRobotoAI/B2",
"base_model:finetune:MrRobotoAI/B2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T22:06:04Z |
---
base_model:
- MrRobotoAI/B5
- MrRobotoAI/B2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/B5](https://huggingface.co/MrRobotoAI/B5)
* [MrRobotoAI/B2](https://huggingface.co/MrRobotoAI/B2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Replaces MrRobotoAI/B5
slices:
- sources:
- model: MrRobotoAI/B2
layer_range: [0, 3]
- sources:
- model: MrRobotoAI/B5
layer_range: [3, 29]
- sources:
- model: MrRobotoAI/B2
layer_range: [29, 32]
merge_method: passthrough
dtype: float16
```
|
MrRobotoAI/C6
|
MrRobotoAI
| 2025-04-25T23:10:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MrRobotoAI/C4",
"base_model:merge:MrRobotoAI/C4",
"base_model:MrRobotoAI/C5",
"base_model:merge:MrRobotoAI/C5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T22:01:01Z |
---
base_model:
- MrRobotoAI/C4
- MrRobotoAI/C5
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/C4](https://huggingface.co/MrRobotoAI/C4)
* [MrRobotoAI/C5](https://huggingface.co/MrRobotoAI/C5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Replaces concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v1_ft_npo_gdr_lora_HICS_v1
models:
- model: MrRobotoAI/C4
- model: MrRobotoAI/C5
merge_method: slerp
base_model: MrRobotoAI/C4
dtype: bfloat16
parameters:
t: [0.5, 0.5, 0.5, 0.5, 0.5]
```
|
deeponh/mal_8b_3b_D1
|
deeponh
| 2025-04-25T23:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T23:04:11Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
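While the authors fill this in, here is a minimal, hypothetical sketch, assuming the repository hosts standard causal-LM weights loadable with `transformers` (only the repo id comes from this card; everything else is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deeponh/mal_8b_3b_D1"  # this repo; architecture details are unconfirmed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```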
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cm9xd9xp50066rbgi1au8cue9_cm9xde4p8006prbgity97qpvx
|
BootesVoid
| 2025-04-25T23:04:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-25T23:04:26Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ROMANCE25
---
# Cm9Xd9Xp50066Rbgi1Au8Cue9_Cm9Xde4P8006Prbgity97Qpvx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ROMANCE25` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

# The replicate client reads your token from the REPLICATE_API_TOKEN environment variable.
input = {
    "prompt": "ROMANCE25",
    "lora_weights": "https://huggingface.co/BootesVoid/cm9xd9xp50066rbgi1au8cue9_cm9xde4p8006prbgity97qpvx/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9xd9xp50066rbgi1au8cue9_cm9xde4p8006prbgity97qpvx', weight_name='lora.safetensors')
image = pipeline('ROMANCE25').images[0]
```
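The pipeline returns a PIL image, so persisting the result is one more call (the filename is illustrative):

```py
image.save("romance25.png")
```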
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9xd9xp50066rbgi1au8cue9_cm9xde4p8006prbgity97qpvx/discussions) to add images that show off what you’ve made with this LoRA.
|
deeponh/mal_8b_8b_D1
|
deeponh
| 2025-04-25T23:02:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T22:57:29Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jerryzh168/phi4-mini-int4wo-gemlite
|
jerryzh168
| 2025-04-25T23:02:22Z | 628 | 0 |
transformers
|
[
"transformers",
"pytorch",
"phi3",
"text-generation",
"torchao",
"phi",
"phi4",
"nlp",
"code",
"math",
"chat",
"conversational",
"custom_code",
"multilingual",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-mini-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-16T00:59:25Z |
---
library_name: transformers
tags:
- torchao
- phi
- phi4
- nlp
- code
- math
- chat
- conversational
license: mit
language:
- multilingual
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
---
[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) quantized by the PyTorch team with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight-only quantization using gemlite kernels.
# Installation
```
pip install transformers
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
pip install git+https://github.com/mobiusml/gemlite/
```
# Quantization Recipe
We used the following code to produce the quantized model:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
model_id = "microsoft/Phi-4-mini-instruct"
from torchao.quantization import GemliteUIntXWeightOnlyConfig
quant_config = GemliteUIntXWeightOnlyConfig()
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Push to hub
USER_ID = "YOUR_USER_ID"
save_to = f"{USER_ID}/{model_id}-int4wo-gemlite"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)
# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

# Local Benchmark
import torch.utils.benchmark as benchmark
from torchao.utils import benchmark_model
import torchao

def benchmark_fn(f, *args, **kwargs):
    # Manual warmup
    for _ in range(2):
        f(*args, **kwargs)
    t0 = benchmark.Timer(
        stmt="f(*args, **kwargs)",
        globals={"args": args, "kwargs": kwargs, "f": f},
        num_threads=torch.get_num_threads(),
    )
    return f"{(t0.blocked_autorange().mean):.3f}"
torchao.quantization.utils.recommended_inductor_config_setter()
quantized_model = torch.compile(quantized_model, mode="max-autotune")
print(f"{save_to} model:", benchmark_fn(quantized_model.generate, **inputs, max_new_tokens=128))
```
# Model Quality
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
## Installing the nightly version to get the most recent updates
```
pip install git+https://github.com/EleutherAI/lm-evaluation-harness
```
## baseline
```
lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8
```
## int4wo-gemlite
```
lm_eval --model hf --model_args pretrained=jerryzh168/phi4-mini-int4wo-gemlite --tasks hellaswag --device cuda:0 --batch_size 8
```
`TODO: more complete eval results`
| Benchmark                        | Phi-4 mini-Ins | phi4-mini-int4wo-gemlite |
|----------------------------------|----------------|--------------------------|
| **Popular aggregated benchmark** | | |
| **Reasoning** | | |
| HellaSwag | 54.57 | 53.51 |
| **Multilingual** | | |
| **Math** | | |
| **Overall** | **TODO** | **TODO** |
# Model Performance
Our int4wo quantization is only optimized for batch size 1, so we benchmark only batch-size-1 performance with vllm below.
## Download vllm source code and install vllm
```
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install .
```
## Download dataset
Download the ShareGPT dataset: `wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json`
Other datasets can be found at https://github.com/vllm-project/vllm/tree/main/benchmarks.
## benchmark_latency
Run the following from the root of the `vllm` source tree:
### baseline
```
python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model microsoft/Phi-4-mini-instruct --batch-size 1
```
### int4wo-gemlite
```
python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model jerryzh168/phi4-mini-int4wo-gemlite --batch-size 1
```
## benchmark_serving
We also benchmarked the throughput in a serving environment.
Run the following from the root of the `vllm` source tree:
### baseline
Server:
```
vllm serve microsoft/Phi-4-mini-instruct --tokenizer microsoft/Phi-4-mini-instruct -O3
```
Client:
```
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model microsoft/Phi-4-mini-instruct --num-prompts 1
```
### int4wo-gemlite
Server:
```
vllm serve jerryzh168/phi4-mini-int4wo-gemlite --tokenizer microsoft/Phi-4-mini-instruct -O3
```
Client:
```
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model jerryzh168/phi4-mini-int4wo-gemlite --num-prompts 1
```
# Serving with vllm
We can use the same command as in the serving benchmark to serve the model with vllm:
```
vllm serve jerryzh168/phi4-mini-int4wo-gemlite --tokenizer microsoft/Phi-4-mini-instruct -O3
```
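Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default), so any OpenAI client can query it. A minimal sketch (the prompt and generation settings are illustrative):

```
from openai import OpenAI

# vllm's OpenAI-compatible server does not require a real API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="jerryzh168/phi4-mini-int4wo-gemlite",
    messages=[{"role": "user", "content": "Summarize int4 weight-only quantization in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```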
|
kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e
|
kk-aivio
| 2025-04-25T23:01:57Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-04-25T23:01:16Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Phi-3-mini-4k-instruct
model-index:
- name: kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e
This model is a PEFT adapter trained on top of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct); the training dataset was not specified.
It achieves the following results on the evaluation set:
- Loss: 3.1599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
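### How to load

A minimal sketch for loading the adapter with `peft` (the base model id comes from this card; device placement is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3-mini-4k-instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```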
|