modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-05-25 18:27:02) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 476 classes) | tags (sequence, lengths 1 – 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-05-25 18:24:25) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
dgambettaphd/M_llm3_gen3_run0_WXS_doc1000_synt64_FRESH | dgambettaphd | 2025-04-21T04:11:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T04:11:36Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tarsur909/pythia1b-oai-summary-ppo-1ep-translated-gap_new | tarsur909 | 2025-04-21T04:06:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T04:05:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mengqizou011438/merged-llama3.2-1B-financial | mengqizou011438 | 2025-04-21T04:06:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:46:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kk-aivio/d5399b1e-4658-4f29-a3d3-85404cbed562 | kk-aivio | 2025-04-21T04:01:39Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-04-21T04:01:07Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Phi-3-mini-4k-instruct
model-index:
- name: kk-aivio/d5399b1e-4658-4f29-a3d3-85404cbed562
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/d5399b1e-4658-4f29-a3d3-85404cbed562
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Srijaaa/bridgegpt_finetuned_UTC_book | Srijaaa | 2025-04-21T04:00:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:12:30Z | ---
library_name: transformers
model_name: bridgegpt_finetuned_UTC_book
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for bridgegpt_finetuned_UTC_book
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Srijaaa/bridgegpt_finetuned_UTC_book", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.6.0
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AravindS373/sft_model | AravindS373 | 2025-04-21T03:59:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:57:14Z | ---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AravindS373
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xiaoyuanliu/Qwen2.5-3B-simplerl-ppo-online.critique-012-p2 | xiaoyuanliu | 2025-04-21T03:58:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:54:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tawankri/DeepCoder-1.5B-Preview-mlx-fp16 | tawankri | 2025-04-21T03:56:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"en",
"dataset:PrimeIntellect/verifiable-coding-problems",
"dataset:likaixin/TACO-verified",
"dataset:livecodebench/code_generation_lite",
"base_model:agentica-org/DeepCoder-1.5B-Preview",
"base_model:finetune:agentica-org/DeepCoder-1.5B-Preview",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:56:35Z | ---
license: mit
library_name: transformers
datasets:
- PrimeIntellect/verifiable-coding-problems
- likaixin/TACO-verified
- livecodebench/code_generation_lite
language:
- en
base_model: agentica-org/DeepCoder-1.5B-Preview
pipeline_tag: text-generation
tags:
- mlx
---
# tawankri/DeepCoder-1.5B-Preview-mlx-fp16
The Model [tawankri/DeepCoder-1.5B-Preview-mlx-fp16](https://huggingface.co/tawankri/DeepCoder-1.5B-Preview-mlx-fp16) was converted to MLX format from [agentica-org/DeepCoder-1.5B-Preview](https://huggingface.co/agentica-org/DeepCoder-1.5B-Preview) using mlx-lm version **0.22.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("tawankri/DeepCoder-1.5B-Preview-mlx-fp16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Yeongi/q-FrozenLake-v1-4x4-noSlippery | Yeongi | 2025-04-21T03:55:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-21T03:54:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Yeongi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
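The snippet above assumes a `gym` import and a `load_from_hub` helper that the card does not define. A minimal sketch of such a helper, assuming the repository stores the Q-table and environment id as a single pickle file (the `gymnasium` import and the `is_slippery=False` kwarg are likewise assumptions matching the "no_slippery" tag):
```python
import pickle

import gymnasium as gym  # assumed; older notebooks used `import gym`
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning artifact (q-table, env_id, hyperparameters) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Yeongi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# The "noSlippery" variant implies is_slippery=False when recreating the environment.
env = gym.make(model["env_id"], is_slippery=False)
```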
|
Bradlk/hi | Bradlk | 2025-04-21T03:52:03Z | 0 | 0 | null | [
"en",
"dataset:nvidia/OpenCodeReasoning",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:finetune:deepseek-ai/DeepSeek-V3-0324",
"license:unknown",
"region:us"
] | null | 2025-04-21T03:48:55Z | ---
license: unknown
datasets:
- nvidia/OpenCodeReasoning
language:
- en
metrics:
- accuracy
- code_eval
base_model:
- deepseek-ai/DeepSeek-V3-0324
--- |
alan314159/Plant_Seedlings_Classification | alan314159 | 2025-04-21T03:51:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T03:49:06Z | ---
license: apache-2.0
---
|
aeqw2a0/ilil2222kjkk | aeqw2a0 | 2025-04-21T03:49:51Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T03:46:51Z | ---
license: apache-2.0
---
|
omarViga/mabama-35-2-v1 | omarViga | 2025-04-21T03:48:17Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:mit",
"region:us"
] | text-to-image | 2025-04-21T03:48:05Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Sexy mabama in black lingerie, she lays down in her cosy bed wearing sexy
black lingerie.
parameters:
negative_prompt: low quality
output:
url: images/chocolate-cup1.png
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: mabama
license: mit
---
# mabama-35-2-v1
<Gallery />
## Model description
mab
## Trigger words
You should use `mabama` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/omarViga/mabama-35-2-v1/tree/main) them in the Files & versions tab.
|
jckim/stt-turbo-multilingual-v0.0.5 | jckim | 2025-04-21T03:43:55Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"multilingual",
"base_model:openai/whisper-large-v3-turbo",
"base_model:adapter:openai/whisper-large-v3-turbo",
"license:mit",
"region:us"
] | null | 2025-04-21T03:43:45Z | ---
library_name: peft
language:
- multilingual
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Turbo Multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Turbo Multilingual
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the custom_multilingual dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0905
- Wer: 12.1973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1009 | 0.7561 | 200 | 0.1303 | 15.5299 |
| 0.0657 | 1.5123 | 400 | 0.0861 | 11.3086 |
| 0.0412 | 2.2684 | 600 | 0.0889 | 11.7307 |
| 0.0352 | 3.0246 | 800 | 0.0935 | 12.5305 |
| 0.0243 | 3.7807 | 1000 | 0.0905 | 12.1973 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.0.0
- Tokenizers 0.20.3 |
nannnzk/task-7-microsoft-phi-4 | nannnzk | 2025-04-21T03:40:44Z | 234 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-4",
"base_model:adapter:microsoft/phi-4",
"region:us"
] | null | 2025-04-20T01:39:17Z | ---
base_model: microsoft/phi-4
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
creasson/malaysian-whisper-large-v3-turbo-v3-finetuned | creasson | 2025-04-21T03:39:54Z | 0 | 0 | null | [
"safetensors",
"whisper",
"license:apache-2.0",
"region:us"
] | null | 2025-04-20T12:23:15Z | ---
license: apache-2.0
---
|
fedovtt/08a46e1b-7324-403b-816c-e6777ecd4e6b | fedovtt | 2025-04-21T03:36:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-21T02:37:45Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 08a46e1b-7324-403b-816c-e6777ecd4e6b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f609fa30a3041e03_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f609fa30a3041e03_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/08a46e1b-7324-403b-816c-e6777ecd4e6b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f609fa30a3041e03_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c3cb4e4-1860-48e4-b0b9-9c000fc5ceb1
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 1c3cb4e4-1860-48e4-b0b9-9c000fc5ceb1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 08a46e1b-7324-403b-816c-e6777ecd4e6b
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1095 | 0.0552 | 200 | 1.3791 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
apend10/bart-finetuned-neutral | apend10 | 2025-04-21T03:35:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-21T03:30:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZMC2019/Qwen7B-MP | ZMC2019 | 2025-04-21T03:35:13Z | 329 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-18T06:20:22Z | ---
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen7B-MP
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen7B-MP
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZMC2019/Qwen7B-MP", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/randomresearch/sparsity/runs/gg7id6of)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Zhihu-ai/Zhi-writing-dsr1-14b | Zhihu-ai | 2025-04-21T03:31:11Z | 32 | 13 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zh",
"en",
"dataset:Congliu/Chinese-DeepSeek-R1-Distill-data-110k",
"dataset:cognitivecomputations/dolphin-r1",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:qihoo360/Light-R1-SFTData",
"dataset:qihoo360/Light-R1-DPOData",
"arxiv:2406.18629",
"arxiv:2402.13228",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T02:23:41Z | ---
license: apache-2.0
datasets:
- Congliu/Chinese-DeepSeek-R1-Distill-data-110k
- cognitivecomputations/dolphin-r1
- open-thoughts/OpenThoughts-114k
- qihoo360/Light-R1-SFTData
- qihoo360/Light-R1-DPOData
language:
- zh
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
tags:
- qwen2
library_name: transformers
---
# Zhi-writing-dsr1-14b
## 1. Introduction
Zhi-writing-dsr1-14b is a fine-tuned model based on DeepSeek-R1-Distill-Qwen-14B, specifically optimized for enhanced creative writing capabilities. Several benchmark evaluations indicate the model's improved creative writing performance.
In the [LLM Creative Story-Writing Benchmark](https://github.com/lechmazur/writing), the model achieved a score of **8.33** compared to its base model's **7.8**. In the [WritingBench](https://github.com/X-PLUG/WritingBench) evaluation framework, it scored **8.46**, showing improvement over DeepSeek-R1-Distill-Qwen-14B's **7.93**. The model was also evaluated using GPT-4o on the AlpacaEval dataset, achieving an **82.6%** win rate when compared with the base model.
The figure below shows the performance comparison across different domains in WritingBench:

<figcaption style="text-align:center; font-size:0.9em; color:#666">
Figure 1: WritingBench performance of Zhi-writing-dsr1-14b and DeepSeek-R1-Distill-Qwen-14B across 6 domains and 3 writing requirements evaluated with WritingBench critic model (scale: 1-10). The six domains include: (D1) Academic & Engineering, (D2) Finance & Business, (D3) Politics & Law, (D4) Literature & Art, (D5) Education, and (D6) Advertising & Marketing. The three writing requirements assessed are: (R1) Style, (R2) Format, and (R3) Length. Here, "C" indicates category-specific scores.
</figcaption>
## 2. Training Process
### Data
The model's training corpus comprises three primary data sources: rigorously filtered open-source datasets, chain-of-thought reasoning corpora, and curated question-answer pairs from Zhihu.
To achieve optimal domain coverage, we meticulously balanced the distribution of various datasets, including [Dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1), [Congliu/Chinese-DeepSeek-R1-Distill-data-110k](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k), [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k), [Light-R1-SFTData](https://huggingface.co/datasets/qihoo360/Light-R1-SFTData), and [Light-R1-DPOData](https://huggingface.co/datasets/qihoo360/Light-R1-DPOData), alongside high-quality content from Zhihu. All datasets underwent comprehensive quality assurance through our Reward Model (RM) filtering pipeline.
### Training
**Supervised Fine-tuning (SFT)**: We employed a curriculum learning strategy for supervised fine-tuning. This methodical approach systematically enhances creative writing capabilities while incorporating diverse domain data to maintain core competencies and mitigate catastrophic forgetting.
**Direct Preference Optimization (DPO)**: For scenarios involving minimal edit distances, we utilized Step-DPO ([arxiv:2406.18629](https://arxiv.org/abs/2406.18629)) to selectively penalize incorrect tokens, while incorporating positive constraints in the loss function as proposed in DPOP ([arXiv:2402.13228](https://arxiv.org/abs/2402.13228)).
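As a rough illustration of the positive constraint mentioned above (a sketch of the DPOP-style objective from arXiv:2402.13228, not the authors' actual training code; the hyperparameter values are placeholders), the loss penalizes the policy whenever it assigns the chosen response less probability than the reference model does:
```python
import torch
import torch.nn.functional as F


def dpop_loss(policy_chosen_logp, policy_rejected_logp,
              ref_chosen_logp, ref_rejected_logp,
              beta: float = 0.1, lam: float = 5.0) -> torch.Tensor:
    """DPO loss with a DPO-Positive penalty; inputs are summed sequence log-probs of shape (batch,)."""
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # The penalty is nonzero only when the policy's chosen-response probability
    # drops below the reference model's, keeping preferred completions "positive".
    positive_penalty = torch.clamp(ref_chosen_logp - policy_chosen_logp, min=0.0)
    logits = beta * (chosen_logratio - rejected_logratio - lam * positive_penalty)
    return -F.logsigmoid(logits).mean()
```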
## 3. Evaluation Results
Our evaluation results suggest promising improvements in the model's creative writing capabilities. In the LLM Creative Story-Writing Benchmark evaluation, the model achieved a score of **8.33**, showing an improvement from the base model's **7.87**. When assessed on WritingBench, a comprehensive framework for evaluating large language model writing abilities, the model attained a score of **8.46**. This places it in proximity to DeepSeek-R1's performance and represents an advancement over DeepSeek-R1-Distill-Qwen-14B's score of 7.93.
With respect to general capabilities, evaluations indicate modest improvements of **2%–5% in knowledge and reasoning tasks (CMMLU, MMLU-Pro)**, alongside encouraging progress in mathematical reasoning as measured by benchmarks such as **AIME-2024, AIME-2025, and GSM8K**. The results suggest that the model maintains a balanced performance profile, with improvements observed across creative writing, knowledge/reasoning, and mathematical tasks compared to DeepSeek-R1-Distill-Qwen-14B. These characteristics potentially make it suitable for a range of general-purpose applications.

<figcaption style="text-align:center; font-size:0.9em; color:#666">
Figure 2: When evaluating model performance, it is recommended to conduct multiple tests and average the results. (We use n=16 and max_tokens=32768 for mathematical tasks and n=2 for others)
</figcaption>
## 4. How to Run Locally
Zhi-writing-dsr1-14b can be deployed on various hardware configurations, including GPUs with 80GB memory, a single H20/A800/H800, or dual RTX 4090. Additionally, the INT4 quantized version Zhi-writing-dsr1-14b-gptq-int4 can be deployed on a single RTX 4090.
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
MODEL_NAME = "Zhihu-ai/Zhi-writing-dsr1-14b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="auto",
trust_remote_code=True
).eval()
# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained(MODEL_NAME, trust_remote_code=True)
generate_configs = {
"temperature": 0.6,
"do_sample": True,
"top_p": 0.95,
"max_new_tokens": 4096
}
prompt = "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
**generate_configs
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### ZhiLight
You can easily start a service using [ZhiLight](https://github.com/zhihu/ZhiLight)
```bash
docker run -it --net=host --gpus='"device=0"' -v /path/to/model:/mnt/models --entrypoint="" ghcr.io/zhihu/zhilight/zhilight:0.4.17-cu124 python -m zhilight.server.openai.entrypoints.api_server --model-path /mnt/models --port 8000 --enable-reasoning --reasoning-parser deepseek-r1 --served-model-name Zhi-writing-dsr1-14b
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Zhi-writing-dsr1-14b",
"prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
"max_tokens": 4096,
"temperature": 0.6,
"top_p": 0.95
}'
```
### vLLM
You can also easily start a service using [vLLM](https://github.com/vllm-project/vllm)
```bash
# install vllm
pip install "vllm>=0.6.4.post1"
# huggingface model id
vllm serve Zhihu-ai/Zhi-writing-dsr1-14b --served-model-name Zhi-writing-dsr1-14b --port 8000
# local path
vllm serve /path/to/model --served-model-name Zhi-writing-dsr1-14b --port 8000
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Zhi-writing-dsr1-14b",
"prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
"max_tokens": 4096,
"temperature": 0.6,
"top_p": 0.95
}'
```
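Because vLLM exposes an OpenAI-compatible API, the same server can also be queried from Python instead of curl. The snippet below is a minimal sketch using the `openai` client package; the base URL and model name simply mirror the serve command above, and the placeholder API key is arbitrary since the local server does not validate it.

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Zhi-writing-dsr1-14b",
    prompt="请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
    max_tokens=4096,
    temperature=0.6,
    top_p=0.95,
)
print(completion.choices[0].text)
```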
### SGLang
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
# install SGLang
pip install "sglang[all]>=0.4.5" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
# huggingface model id
python -m sglang.launch_server --model-path Zhihu-ai/Zhi-writing-dsr1-14b --served-model-name Zhi-writing-dsr1-14b --port 8000
# local path
python -m sglang.launch_server --model-path /path/to/model --served-model-name Zhi-writing-dsr1-14b --port 8000
# send request
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Zhi-writing-dsr1-14b",
"prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
"max_tokens": 4096,
"temperature": 0.6,
"top_p": 0.95
}'
```
### ollama
You can download ollama using [this](https://ollama.com/download/)
* quantization: Q4_K_M
```bash
ollama run zhihu/zhi-writing-dsr1-14b
```
* bf16
```bash
ollama run zhihu/zhi-writing-dsr1-14b:bf16
```
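If you prefer to call the pulled model from Python rather than the CLI, the snippet below is a minimal sketch using the `ollama` Python package; the model tag is taken from the commands above, and the options mirror the recommended sampling settings.

```python
# pip install ollama
import ollama

# Chat with the locally pulled model; options follow the recommended sampling settings.
response = ollama.chat(
    model="zhihu/zhi-writing-dsr1-14b",
    messages=[{"role": "user", "content": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"}],
    options={"temperature": 0.6, "top_p": 0.95},
)
print(response["message"]["content"])
```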
## 5. Usage Recommendations
We recommend adhering to the following configurations when using Zhi-writing-dsr1-14b, including during benchmarking, to achieve the expected performance:
* Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
* When evaluating model performance, it is recommended to conduct multiple tests and average the results. (We use `n=16` and `max_tokens=32768` for mathematical tasks and `n=2` for others)
* To ensure that the model engages in thorough reasoning like the DeepSeek-R1 series models, we recommend forcing the model to initiate its response with "\<think\>\n" at the beginning of every output (see the sketch after this list).
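As a minimal sketch of the last point, the prefix can be appended to the templated prompt before generation. This assumes the `tokenizer` and `model` objects from the Transformers example above; exact handling may differ in other serving stacks.

```python
# Force the response to start inside a <think> block (sketch; assumes the
# `tokenizer` and `model` from the Transformers example above are loaded).
messages = [{"role": "user", "content": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
text += "<think>\n"  # enforce the thinking prefix

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    max_new_tokens=4096,
)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]
print(response)
```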
## 6. Citation
```text
@misc{Zhi-writing-dsr1-14b,
title={Zhi-writing-dsr1-14b: Curriculum Reinforcement and Direct Preference Optimization for Robust Creative Writing in LLMs},
author={Jiewu Wang and Xu Chen and Wenyuan Su and Chao Huang and Hongkui Gao and Lin Feng and Shan Wang and Lu Xu and Penghe Liu and Zebin Ou},
year={2025},
eprint={},
archivePrefix={},
url={https://huggingface.co/Zhihu-ai/Zhi-writing-dsr1-14b},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
elliotthwang/llama-2-7b-chat_zh | elliotthwang | 2025-04-21T03:30:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:12:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A remarkable Chinese-localization training run.
NousResearch/Llama-2-7b-chat-hf was fine-tuned for Chinese.
The training loss turned out, surprisingly, to be loss: 0.0000.
|
dongrihua/qq | dongrihua | 2025-04-21T03:24:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T03:24:47Z | ---
license: apache-2.0
---
|
hendrydong/qwen-7b-raft-cliphigh-step200 | hendrydong | 2025-04-21T03:23:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:20:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidAU/Llama-3.1-1million-ctx-Dark-Planet-v1.01-8B | DavidAU | 2025-04-21T03:22:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B",
"base_model:merge:DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B",
"base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot",
"base_model:merge:Hastagaras/Jamet-8B-L3-MK.V-Blackroot",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"base_model:nvidia/Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct",
"base_model:merge:nvidia/Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T00:38:02Z | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- nvidia/Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct
- DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B
---
<h2>Llama-3.1-1million-ctx-Dark-Planet-v1.01-8B</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
"V1.01" has modifications to address some issues related to non-stop/overly long gen and/or repeat "end paragraph" issues. I am keeping the org quants too, because of the difference in
creative generation between the two versions is very strong. I am not saying "reg" is better than "v1.01", they are
just different, and you should have the choice between both in my opinion.
The "GGUF" link at the bottom of the page links to repo with both V1.01 and "reg" quants in the repo.
NOTE: If you intend to make GGUF quants, it is suggested to make the master file in float32 ("f32") and then quantize from this file, because of the float-32 components / models in this merge.
(source files will be uploaded when parameter count shows in upper left)
NOTE: Links to GGUFs below.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps).
This a "Class 3/4" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which many times addresses model issue(s)), including methods to improve model performance for all use case(s) such as chat, roleplay and other use case(s) (especially use case(s) beyond the model's design), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR also allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model, especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/Llama-3.1-1-million-cxt-Dark-Planet-8B-GGUF ] |
hZzy/mistral-7b-expo-7b-L2EXPO-25-last-3 | hZzy | 2025-04-21T03:21:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/direction_right2",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:adapter:hZzy/mistral-7b-sft-25-1",
"license:apache-2.0",
"region:us"
] | null | 2025-04-20T20:13:16Z | ---
base_model: hZzy/mistral-7b-sft-25-1
datasets:
- hZzy/direction_right2
library_name: peft
license: apache-2.0
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
model-index:
- name: mistral-7b-expo-7b-L2EXPO-25-last-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/6e78lbhy)
# mistral-7b-expo-7b-L2EXPO-25-last-3
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1) on the hZzy/direction_right2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4652
- Objective: 0.4665
- Reward Accuracy: 0.6468
- Logp Accuracy: 0.5380
- Log Diff Policy: 1.7463
- Chosen Logps: -88.9876
- Rejected Logps: -90.7340
- Chosen Rewards: 0.5695
- Rejected Rewards: 0.4330
- Logits: -2.1608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 108
- total_eval_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Reward Accuracy | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Chosen Rewards | Rejected Rewards | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:---------------:|:-------------:|:---------------:|:------------:|:--------------:|:--------------:|:----------------:|:-------:|
| 0.5816 | 0.0758 | 50 | 0.5092 | 0.5064 | 0.5489 | 0.5176 | 0.4504 | -93.1500 | -93.6004 | 0.1532 | 0.1464 | -2.1905 |
| 0.5803 | 0.1517 | 100 | 0.4981 | 0.4935 | 0.5710 | 0.5246 | 0.7658 | -94.0984 | -94.8642 | 0.0584 | 0.0200 | -2.2166 |
| 0.6056 | 0.2275 | 150 | 0.4821 | 0.4811 | 0.6035 | 0.5280 | 1.0402 | -92.9769 | -94.0170 | 0.1705 | 0.1047 | -2.2026 |
| 0.5299 | 0.3033 | 200 | 0.4781 | 0.4783 | 0.6177 | 0.5338 | 1.2448 | -91.3817 | -92.6265 | 0.3301 | 0.2438 | -2.2070 |
| 0.5156 | 0.3792 | 250 | 0.4757 | 0.4785 | 0.6205 | 0.5352 | 1.3596 | -92.2695 | -93.6291 | 0.2413 | 0.1435 | -2.2315 |
| 0.5013 | 0.4550 | 300 | 0.4743 | 0.4760 | 0.6312 | 0.5322 | 1.5243 | -91.0031 | -92.5274 | 0.3679 | 0.2537 | -2.2311 |
| 0.4959 | 0.5308 | 350 | 0.4681 | 0.4693 | 0.6337 | 0.5333 | 1.5031 | -90.2225 | -91.7256 | 0.4460 | 0.3339 | -2.2133 |
| 0.4667 | 0.6067 | 400 | 0.4647 | 0.4667 | 0.6395 | 0.5358 | 1.6181 | -91.8421 | -93.4602 | 0.2840 | 0.1604 | -2.1876 |
| 0.4661 | 0.6825 | 450 | 0.4663 | 0.4689 | 0.6298 | 0.5330 | 1.6059 | -90.0967 | -91.7026 | 0.4586 | 0.3362 | -2.1883 |
| 0.5 | 0.7583 | 500 | 0.4699 | 0.4724 | 0.6306 | 0.5361 | 1.6815 | -87.4541 | -89.1356 | 0.7228 | 0.5929 | -2.1850 |
| 0.4319 | 0.8342 | 550 | 0.4681 | 0.4718 | 0.6267 | 0.5366 | 1.7006 | -88.2031 | -89.9036 | 0.6479 | 0.5161 | -2.1868 |
| 0.4536 | 0.9100 | 600 | 0.4632 | 0.4665 | 0.6278 | 0.5358 | 1.6002 | -89.8265 | -91.4267 | 0.4856 | 0.3638 | -2.1747 |
| 0.4925 | 0.9858 | 650 | 0.4657 | 0.4683 | 0.6309 | 0.5380 | 1.7545 | -91.7867 | -93.5412 | 0.2896 | 0.1523 | -2.1635 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1 |
notzero/qwen1_5b_test | notzero | 2025-04-21T03:19:23Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-20T02:47:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tarsur909/pythia1b-oai-summary-ppo-1ep | tarsur909 | 2025-04-21T03:18:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:18:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
uscCodingStudent101/gpt2-2tl-v3 | uscCodingStudent101 | 2025-04-21T03:18:32Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"region:us"
] | text-generation | 2025-04-21T03:17:14Z | ---
license: mit
pipeline_tag: text-generation
--- |
hendrydong/qwen-7b-raft-cliphigh-step160 | hendrydong | 2025-04-21T03:14:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:11:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IParraMartin/impossible-llms-dutch-fronting-bigram | IParraMartin | 2025-04-21T03:12:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-20T21:38:24Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: impossible-llms-dutch-fronting-bigram
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# impossible-llms-dutch-fronting-bigram
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 83.6673 | 0.9180 | 7 | 10.1956 |
| 75.7206 | 1.9180 | 14 | 9.3458 |
| 72.3098 | 2.9180 | 21 | 8.9449 |
| 70.2821 | 3.9180 | 28 | 8.7390 |
| 69.0478 | 4.9180 | 35 | 8.5625 |
| 68.338 | 5.9180 | 42 | 8.4006 |
| 66.1562 | 6.9180 | 49 | 8.2220 |
| 64.9807 | 7.9180 | 56 | 8.0375 |
| 63.2723 | 8.9180 | 63 | 7.8572 |
| 61.0906 | 9.9180 | 70 | 7.6713 |
| 60.2245 | 10.9180 | 77 | 7.4791 |
| 59.196 | 11.9180 | 84 | 7.2779 |
| 56.8529 | 12.9180 | 91 | 7.0854 |
| 55.7183 | 13.9180 | 98 | 6.8852 |
| 54.0638 | 14.9180 | 105 | 6.6934 |
| 52.6141 | 15.9180 | 112 | 6.5213 |
| 51.2209 | 16.9180 | 119 | 6.3724 |
| 49.964 | 17.9180 | 126 | 6.2568 |
| 49.4053 | 18.9180 | 133 | 6.1501 |
| 48.3958 | 19.9180 | 140 | 6.0563 |
| 48.2846 | 20.9180 | 147 | 6.0040 |
| 47.9167 | 21.9180 | 154 | 5.9494 |
| 47.3996 | 22.9180 | 161 | 5.9178 |
| 46.9891 | 23.9180 | 168 | 5.8925 |
| 46.7405 | 24.9180 | 175 | 5.8723 |
| 46.6686 | 25.9180 | 182 | 5.8477 |
| 46.6996 | 26.9180 | 189 | 5.8223 |
| 46.5263 | 27.9180 | 196 | 5.8009 |
| 46.1017 | 28.9180 | 203 | 5.7802 |
| 46.1647 | 29.9180 | 210 | 5.7587 |
| 46.1546 | 30.9180 | 217 | 5.7441 |
| 45.7013 | 31.9180 | 224 | 5.7213 |
| 45.0503 | 32.9180 | 231 | 5.7076 |
| 45.2251 | 33.9180 | 238 | 5.6904 |
| 44.8943 | 34.9180 | 245 | 5.6772 |
| 44.9269 | 35.9180 | 252 | 5.6669 |
| 44.902 | 36.9180 | 259 | 5.6532 |
| 44.2823 | 37.9180 | 266 | 5.6402 |
| 44.6306 | 38.9180 | 273 | 5.6322 |
| 44.1941 | 39.9180 | 280 | 5.6278 |
| 44.4267 | 40.9180 | 287 | 5.6130 |
| 44.0962 | 41.9180 | 294 | 5.6022 |
| 43.9981 | 42.9180 | 301 | 5.5818 |
| 44.065 | 43.9180 | 308 | 5.5700 |
| 44.0451 | 44.9180 | 315 | 5.5602 |
| 43.7268 | 45.9180 | 322 | 5.5563 |
| 43.3324 | 46.9180 | 329 | 5.5343 |
| 43.1901 | 47.9180 | 336 | 5.5254 |
| 43.2259 | 48.9180 | 343 | 5.5198 |
| 43.2377 | 49.9180 | 350 | 5.4982 |
| 43.2266 | 50.9180 | 357 | 5.4944 |
| 42.7451 | 51.9180 | 364 | 5.4823 |
| 42.2536 | 52.9180 | 371 | 5.4781 |
| 42.2201 | 53.9180 | 378 | 5.4613 |
| 41.6915 | 54.9180 | 385 | 5.4499 |
| 42.0139 | 55.9180 | 392 | 5.4467 |
| 41.6444 | 56.9180 | 399 | 5.4400 |
| 41.2884 | 57.9180 | 406 | 5.4345 |
| 41.7892 | 58.9180 | 413 | 5.4238 |
| 41.3308 | 59.9180 | 420 | 5.4166 |
| 41.3869 | 60.9180 | 427 | 5.4145 |
| 40.7788 | 61.9180 | 434 | 5.4142 |
| 40.7967 | 62.9180 | 441 | 5.3992 |
| 40.3866 | 63.9180 | 448 | 5.4041 |
| 40.5721 | 64.9180 | 455 | 5.3941 |
| 40.5428 | 65.9180 | 462 | 5.3920 |
| 40.3906 | 66.9180 | 469 | 5.3858 |
| 40.151 | 67.9180 | 476 | 5.3839 |
| 39.6923 | 68.9180 | 483 | 5.3856 |
| 39.939 | 69.9180 | 490 | 5.3755 |
| 39.6868 | 70.9180 | 497 | 5.3749 |
| 39.6195 | 71.9180 | 504 | 5.3781 |
| 39.4678 | 72.9180 | 511 | 5.3854 |
| 39.1837 | 73.9180 | 518 | 5.3784 |
| 38.9759 | 74.9180 | 525 | 5.3770 |
| 38.5049 | 75.9180 | 532 | 5.3767 |
| 39.226 | 76.9180 | 539 | 5.3781 |
| 39.0352 | 77.9180 | 546 | 5.3835 |
| 38.1275 | 78.9180 | 553 | 5.3864 |
| 38.3793 | 79.9180 | 560 | 5.3914 |
| 38.0253 | 80.9180 | 567 | 5.3863 |
| 37.9907 | 81.9180 | 574 | 5.3995 |
| 38.0236 | 82.9180 | 581 | 5.4028 |
| 37.6736 | 83.9180 | 588 | 5.4061 |
| 38.1138 | 84.9180 | 595 | 5.4162 |
| 37.4033 | 85.9180 | 602 | 5.4258 |
| 37.1398 | 86.9180 | 609 | 5.4253 |
| 36.6317 | 87.9180 | 616 | 5.4345 |
| 36.4137 | 88.9180 | 623 | 5.4452 |
| 36.9893 | 89.9180 | 630 | 5.4385 |
| 36.8399 | 90.9180 | 637 | 5.4598 |
| 36.3405 | 91.9180 | 644 | 5.4627 |
| 36.4991 | 92.9180 | 651 | 5.4632 |
| 36.3748 | 93.9180 | 658 | 5.4811 |
| 35.9608 | 94.9180 | 665 | 5.4745 |
| 35.9226 | 95.9180 | 672 | 5.4863 |
| 35.9751 | 96.9180 | 679 | 5.5081 |
| 35.5547 | 97.9180 | 686 | 5.4970 |
| 35.2164 | 98.9180 | 693 | 5.5214 |
| 34.9061 | 99.9180 | 700 | 5.5231 |
| 34.9274 | 100.9180 | 707 | 5.5416 |
| 34.9502 | 101.9180 | 714 | 5.5474 |
| 34.9347 | 102.9180 | 721 | 5.5456 |
| 34.5548 | 103.9180 | 728 | 5.5700 |
| 34.8556 | 104.9180 | 735 | 5.5781 |
| 34.6057 | 105.9180 | 742 | 5.5848 |
| 34.4437 | 106.9180 | 749 | 5.5971 |
| 34.1718 | 107.9180 | 756 | 5.6073 |
| 34.0907 | 108.9180 | 763 | 5.6244 |
| 33.6848 | 109.9180 | 770 | 5.6291 |
| 33.5163 | 110.9180 | 777 | 5.6382 |
| 33.4478 | 111.9180 | 784 | 5.6576 |
| 33.3843 | 112.9180 | 791 | 5.6750 |
| 32.958 | 113.9180 | 798 | 5.6639 |
| 33.0119 | 114.9180 | 805 | 5.6917 |
| 32.8102 | 115.9180 | 812 | 5.6936 |
| 32.6526 | 116.9180 | 819 | 5.7121 |
| 32.4103 | 117.9180 | 826 | 5.7240 |
| 32.1539 | 118.9180 | 833 | 5.7442 |
| 32.4264 | 119.9180 | 840 | 5.7544 |
| 32.255 | 120.9180 | 847 | 5.7611 |
| 32.1105 | 121.9180 | 854 | 5.7661 |
| 31.9384 | 122.9180 | 861 | 5.7819 |
| 31.7077 | 123.9180 | 868 | 5.7929 |
| 31.4754 | 124.9180 | 875 | 5.8061 |
| 31.2536 | 125.9180 | 882 | 5.8106 |
| 31.2701 | 126.9180 | 889 | 5.8371 |
| 30.912 | 127.9180 | 896 | 5.8546 |
| 30.834 | 128.9180 | 903 | 5.8566 |
| 30.8271 | 129.9180 | 910 | 5.8626 |
| 30.6406 | 130.9180 | 917 | 5.8845 |
| 30.1948 | 131.9180 | 924 | 5.9023 |
| 30.5228 | 132.9180 | 931 | 5.9115 |
| 30.3403 | 133.9180 | 938 | 5.9134 |
| 29.9515 | 134.9180 | 945 | 5.9286 |
| 29.9577 | 135.9180 | 952 | 5.9542 |
| 29.677 | 136.9180 | 959 | 5.9454 |
| 29.6693 | 137.9180 | 966 | 5.9675 |
| 29.4862 | 138.9180 | 973 | 5.9812 |
| 29.1925 | 139.9180 | 980 | 5.9978 |
| 29.3036 | 140.9180 | 987 | 6.0049 |
| 29.0343 | 141.9180 | 994 | 6.0295 |
| 28.8367 | 142.9180 | 1001 | 6.0296 |
| 28.5935 | 143.9180 | 1008 | 6.0414 |
| 28.5507 | 144.9180 | 1015 | 6.0576 |
| 28.5536 | 145.9180 | 1022 | 6.0778 |
| 28.5476 | 146.9180 | 1029 | 6.0790 |
| 28.4762 | 147.9180 | 1036 | 6.0833 |
| 28.3717 | 148.9180 | 1043 | 6.1077 |
| 28.0584 | 149.9180 | 1050 | 6.1150 |
| 27.8302 | 150.9180 | 1057 | 6.1315 |
| 27.6707 | 151.9180 | 1064 | 6.1347 |
| 27.4556 | 152.9180 | 1071 | 6.1526 |
| 27.6272 | 153.9180 | 1078 | 6.1656 |
| 27.1733 | 154.9180 | 1085 | 6.1874 |
| 27.266 | 155.9180 | 1092 | 6.1981 |
| 27.0323 | 156.9180 | 1099 | 6.2104 |
| 26.9986 | 157.9180 | 1106 | 6.2195 |
| 26.873 | 158.9180 | 1113 | 6.2223 |
| 26.6617 | 159.9180 | 1120 | 6.2467 |
| 26.4292 | 160.9180 | 1127 | 6.2545 |
| 26.4855 | 161.9180 | 1134 | 6.2671 |
| 26.215 | 162.9180 | 1141 | 6.2774 |
| 26.1489 | 163.9180 | 1148 | 6.2915 |
| 26.3328 | 164.9180 | 1155 | 6.2961 |
| 25.8904 | 165.9180 | 1162 | 6.3048 |
| 25.8863 | 166.9180 | 1169 | 6.3237 |
| 25.8484 | 167.9180 | 1176 | 6.3325 |
| 25.6263 | 168.9180 | 1183 | 6.3506 |
| 25.4471 | 169.9180 | 1190 | 6.3469 |
| 25.4162 | 170.9180 | 1197 | 6.3608 |
| 25.3895 | 171.9180 | 1204 | 6.3787 |
| 25.0886 | 172.9180 | 1211 | 6.3826 |
| 25.051 | 173.9180 | 1218 | 6.3959 |
| 24.9231 | 174.9180 | 1225 | 6.4132 |
| 24.6853 | 175.9180 | 1232 | 6.4270 |
| 24.8123 | 176.9180 | 1239 | 6.4290 |
| 24.5636 | 177.9180 | 1246 | 6.4452 |
| 24.5249 | 178.9180 | 1253 | 6.4572 |
| 24.25 | 179.9180 | 1260 | 6.4652 |
| 24.2591 | 180.9180 | 1267 | 6.4670 |
| 24.0907 | 181.9180 | 1274 | 6.4866 |
| 24.013 | 182.9180 | 1281 | 6.4951 |
| 23.8728 | 183.9180 | 1288 | 6.5100 |
| 23.9411 | 184.9180 | 1295 | 6.5066 |
| 23.7083 | 185.9180 | 1302 | 6.5290 |
| 23.7176 | 186.9180 | 1309 | 6.5326 |
| 23.5182 | 187.9180 | 1316 | 6.5541 |
| 23.3504 | 188.9180 | 1323 | 6.5660 |
| 23.4175 | 189.9180 | 1330 | 6.5668 |
| 23.217 | 190.9180 | 1337 | 6.5860 |
| 23.1964 | 191.9180 | 1344 | 6.5914 |
| 22.909 | 192.9180 | 1351 | 6.5891 |
| 22.9406 | 193.9180 | 1358 | 6.6052 |
| 22.9394 | 194.9180 | 1365 | 6.6192 |
| 22.8075 | 195.9180 | 1372 | 6.6292 |
| 22.6948 | 196.9180 | 1379 | 6.6334 |
| 22.6171 | 197.9180 | 1386 | 6.6415 |
| 22.5568 | 198.9180 | 1393 | 6.6495 |
| 22.5281 | 199.9180 | 1400 | 6.6619 |
| 22.3907 | 200.9180 | 1407 | 6.6728 |
| 22.2757 | 201.9180 | 1414 | 6.6944 |
| 22.3475 | 202.9180 | 1421 | 6.6898 |
| 22.1154 | 203.9180 | 1428 | 6.6898 |
| 22.0423 | 204.9180 | 1435 | 6.7088 |
| 21.9327 | 205.9180 | 1442 | 6.7211 |
| 21.7245 | 206.9180 | 1449 | 6.7252 |
| 21.8766 | 207.9180 | 1456 | 6.7375 |
| 21.6527 | 208.9180 | 1463 | 6.7348 |
| 21.7386 | 209.9180 | 1470 | 6.7472 |
| 21.6643 | 210.9180 | 1477 | 6.7558 |
| 21.3764 | 211.9180 | 1484 | 6.7740 |
| 21.3917 | 212.9180 | 1491 | 6.7777 |
| 21.4281 | 213.9180 | 1498 | 6.7824 |
| 21.1757 | 214.9180 | 1505 | 6.7908 |
| 21.2494 | 215.9180 | 1512 | 6.7976 |
| 21.1086 | 216.9180 | 1519 | 6.8047 |
| 21.0251 | 217.9180 | 1526 | 6.8199 |
| 20.8031 | 218.9180 | 1533 | 6.8263 |
| 20.9154 | 219.9180 | 1540 | 6.8258 |
| 20.8454 | 220.9180 | 1547 | 6.8398 |
| 20.6033 | 221.9180 | 1554 | 6.8448 |
| 20.5957 | 222.9180 | 1561 | 6.8610 |
| 20.6872 | 223.9180 | 1568 | 6.8638 |
| 20.3843 | 224.9180 | 1575 | 6.8703 |
| 20.4692 | 225.9180 | 1582 | 6.8917 |
| 20.4153 | 226.9180 | 1589 | 6.8809 |
| 20.2434 | 227.9180 | 1596 | 6.8948 |
| 20.3918 | 228.9180 | 1603 | 6.9083 |
| 20.1626 | 229.9180 | 1610 | 6.9046 |
| 20.261 | 230.9180 | 1617 | 6.9204 |
| 20.0765 | 231.9180 | 1624 | 6.9252 |
| 19.9718 | 232.9180 | 1631 | 6.9314 |
| 19.9375 | 233.9180 | 1638 | 6.9385 |
| 19.9831 | 234.9180 | 1645 | 6.9447 |
| 19.9072 | 235.9180 | 1652 | 6.9436 |
| 19.7478 | 236.9180 | 1659 | 6.9597 |
| 19.6264 | 237.9180 | 1666 | 6.9608 |
| 19.5997 | 238.9180 | 1673 | 6.9732 |
| 19.5747 | 239.9180 | 1680 | 6.9861 |
| 19.4862 | 240.9180 | 1687 | 6.9806 |
| 19.4703 | 241.9180 | 1694 | 6.9854 |
| 19.4437 | 242.9180 | 1701 | 6.9917 |
| 19.3692 | 243.9180 | 1708 | 7.0022 |
| 19.2464 | 244.9180 | 1715 | 7.0061 |
| 19.2336 | 245.9180 | 1722 | 7.0171 |
| 19.1552 | 246.9180 | 1729 | 7.0197 |
| 19.1323 | 247.9180 | 1736 | 7.0175 |
| 19.077 | 248.9180 | 1743 | 7.0307 |
| 19.0595 | 249.9180 | 1750 | 7.0348 |
| 18.9852 | 250.9180 | 1757 | 7.0430 |
| 18.9572 | 251.9180 | 1764 | 7.0453 |
| 18.993 | 252.9180 | 1771 | 7.0570 |
| 18.8989 | 253.9180 | 1778 | 7.0581 |
| 18.691 | 254.9180 | 1785 | 7.0699 |
| 18.8285 | 255.9180 | 1792 | 7.0669 |
| 18.6967 | 256.9180 | 1799 | 7.0757 |
| 18.5931 | 257.9180 | 1806 | 7.0807 |
| 18.6768 | 258.9180 | 1813 | 7.0865 |
| 18.5537 | 259.9180 | 1820 | 7.0934 |
| 18.539 | 260.9180 | 1827 | 7.0953 |
| 18.3686 | 261.9180 | 1834 | 7.1071 |
| 18.4691 | 262.9180 | 1841 | 7.1061 |
| 18.2915 | 263.9180 | 1848 | 7.1186 |
| 18.1952 | 264.9180 | 1855 | 7.1096 |
| 18.121 | 265.9180 | 1862 | 7.1228 |
| 18.3078 | 266.9180 | 1869 | 7.1228 |
| 18.2565 | 267.9180 | 1876 | 7.1326 |
| 18.2128 | 268.9180 | 1883 | 7.1317 |
| 18.0796 | 269.9180 | 1890 | 7.1313 |
| 18.0938 | 270.9180 | 1897 | 7.1441 |
| 18.1037 | 271.9180 | 1904 | 7.1533 |
| 18.0365 | 272.9180 | 1911 | 7.1559 |
| 17.8114 | 273.9180 | 1918 | 7.1561 |
| 17.8991 | 274.9180 | 1925 | 7.1635 |
| 17.8748 | 275.9180 | 1932 | 7.1700 |
| 17.7981 | 276.9180 | 1939 | 7.1745 |
| 17.7959 | 277.9180 | 1946 | 7.1880 |
| 17.7333 | 278.9180 | 1953 | 7.1809 |
| 17.7708 | 279.9180 | 1960 | 7.1966 |
| 17.724 | 280.9180 | 1967 | 7.1890 |
| 17.5739 | 281.9180 | 1974 | 7.2033 |
| 17.6699 | 282.9180 | 1981 | 7.2033 |
| 17.6359 | 283.9180 | 1988 | 7.2064 |
| 17.5734 | 284.9180 | 1995 | 7.2038 |
| 17.6082 | 285.9180 | 2002 | 7.2078 |
| 17.4631 | 286.9180 | 2009 | 7.2178 |
| 17.4105 | 287.9180 | 2016 | 7.2228 |
| 17.4747 | 288.9180 | 2023 | 7.2299 |
| 17.4737 | 289.9180 | 2030 | 7.2346 |
| 17.3741 | 290.9180 | 2037 | 7.2325 |
| 17.3798 | 291.9180 | 2044 | 7.2365 |
| 17.374 | 292.9180 | 2051 | 7.2448 |
| 17.2849 | 293.9180 | 2058 | 7.2478 |
| 17.3266 | 294.9180 | 2065 | 7.2528 |
| 17.2119 | 295.9180 | 2072 | 7.2465 |
| 17.2125 | 296.9180 | 2079 | 7.2552 |
| 17.1324 | 297.9180 | 2086 | 7.2549 |
| 17.0641 | 298.9180 | 2093 | 7.2625 |
| 17.095 | 299.9180 | 2100 | 7.2642 |
| 17.1231 | 300.9180 | 2107 | 7.2685 |
| 16.9683 | 301.9180 | 2114 | 7.2621 |
| 17.0125 | 302.9180 | 2121 | 7.2700 |
| 17.0242 | 303.9180 | 2128 | 7.2754 |
| 16.902 | 304.9180 | 2135 | 7.2836 |
| 17.0353 | 305.9180 | 2142 | 7.2812 |
| 16.9487 | 306.9180 | 2149 | 7.2794 |
| 16.8356 | 307.9180 | 2156 | 7.2958 |
| 16.7948 | 308.9180 | 2163 | 7.2922 |
| 16.8632 | 309.9180 | 2170 | 7.2923 |
| 16.7753 | 310.9180 | 2177 | 7.2994 |
| 16.7484 | 311.9180 | 2184 | 7.3001 |
| 16.7657 | 312.9180 | 2191 | 7.3089 |
| 16.7394 | 313.9180 | 2198 | 7.3070 |
| 16.7323 | 314.9180 | 2205 | 7.3080 |
| 16.6862 | 315.9180 | 2212 | 7.3139 |
| 16.6908 | 316.9180 | 2219 | 7.3175 |
| 16.6181 | 317.9180 | 2226 | 7.3198 |
| 16.713 | 318.9180 | 2233 | 7.3233 |
| 16.5816 | 319.9180 | 2240 | 7.3198 |
| 16.5054 | 320.9180 | 2247 | 7.3268 |
| 16.6918 | 321.9180 | 2254 | 7.3292 |
| 16.6408 | 322.9180 | 2261 | 7.3231 |
| 16.5806 | 323.9180 | 2268 | 7.3302 |
| 16.5555 | 324.9180 | 2275 | 7.3388 |
| 16.5342 | 325.9180 | 2282 | 7.3398 |
| 16.4235 | 326.9180 | 2289 | 7.3405 |
| 16.5023 | 327.9180 | 2296 | 7.3402 |
| 16.4626 | 328.9180 | 2303 | 7.3428 |
| 16.3941 | 329.9180 | 2310 | 7.3406 |
| 16.4279 | 330.9180 | 2317 | 7.3508 |
| 16.4444 | 331.9180 | 2324 | 7.3459 |
| 16.3836 | 332.9180 | 2331 | 7.3510 |
| 16.4184 | 333.9180 | 2338 | 7.3566 |
| 16.3631 | 334.9180 | 2345 | 7.3561 |
| 16.3384 | 335.9180 | 2352 | 7.3603 |
| 16.3244 | 336.9180 | 2359 | 7.3589 |
| 16.2741 | 337.9180 | 2366 | 7.3628 |
| 16.304 | 338.9180 | 2373 | 7.3642 |
| 16.2746 | 339.9180 | 2380 | 7.3642 |
| 16.3027 | 340.9180 | 2387 | 7.3663 |
| 16.242 | 341.9180 | 2394 | 7.3688 |
| 16.2367 | 342.9180 | 2401 | 7.3676 |
| 16.1752 | 343.9180 | 2408 | 7.3685 |
| 16.1994 | 344.9180 | 2415 | 7.3741 |
| 16.108 | 345.9180 | 2422 | 7.3765 |
| 16.1362 | 346.9180 | 2429 | 7.3781 |
| 16.2155 | 347.9180 | 2436 | 7.3800 |
| 16.2048 | 348.9180 | 2443 | 7.3775 |
| 16.1283 | 349.9180 | 2450 | 7.3837 |
| 16.1557 | 350.9180 | 2457 | 7.3850 |
| 16.1228 | 351.9180 | 2464 | 7.3871 |
| 16.0919 | 352.9180 | 2471 | 7.3901 |
| 16.0663 | 353.9180 | 2478 | 7.3876 |
| 16.0835 | 354.9180 | 2485 | 7.3865 |
| 16.0743 | 355.9180 | 2492 | 7.3877 |
| 16.1027 | 356.9180 | 2499 | 7.3859 |
| 16.0455 | 357.9180 | 2506 | 7.3990 |
| 16.0158 | 358.9180 | 2513 | 7.3962 |
| 16.1156 | 359.9180 | 2520 | 7.3955 |
| 16.0399 | 360.9180 | 2527 | 7.3975 |
| 15.9918 | 361.9180 | 2534 | 7.4001 |
| 15.9691 | 362.9180 | 2541 | 7.4001 |
| 15.9589 | 363.9180 | 2548 | 7.4017 |
| 15.9023 | 364.9180 | 2555 | 7.4012 |
| 15.9877 | 365.9180 | 2562 | 7.4013 |
| 15.9899 | 366.9180 | 2569 | 7.4031 |
| 15.859 | 367.9180 | 2576 | 7.4029 |
| 16.0035 | 368.9180 | 2583 | 7.4008 |
| 15.9595 | 369.9180 | 2590 | 7.4076 |
| 15.9003 | 370.9180 | 2597 | 7.4072 |
| 15.9346 | 371.9180 | 2604 | 7.4094 |
| 15.9182 | 372.9180 | 2611 | 7.4060 |
| 15.8456 | 373.9180 | 2618 | 7.4104 |
| 15.9017 | 374.9180 | 2625 | 7.4097 |
| 15.9036 | 375.9180 | 2632 | 7.4126 |
| 15.8435 | 376.9180 | 2639 | 7.4117 |
| 15.8846 | 377.9180 | 2646 | 7.4136 |
| 15.838 | 378.9180 | 2653 | 7.4110 |
| 15.8668 | 379.9180 | 2660 | 7.4139 |
| 15.8037 | 380.9180 | 2667 | 7.4129 |
| 15.8425 | 381.9180 | 2674 | 7.4157 |
| 15.8884 | 382.9180 | 2681 | 7.4145 |
| 15.7861 | 383.9180 | 2688 | 7.4151 |
| 15.8327 | 384.9180 | 2695 | 7.4176 |
| 15.8519 | 385.9180 | 2702 | 7.4165 |
| 15.7866 | 386.9180 | 2709 | 7.4200 |
| 15.8224 | 387.9180 | 2716 | 7.4191 |
| 15.8119 | 388.9180 | 2723 | 7.4184 |
| 15.7032 | 389.9180 | 2730 | 7.4195 |
| 15.8936 | 390.9180 | 2737 | 7.4206 |
| 15.8436 | 391.9180 | 2744 | 7.4201 |
| 15.7939 | 392.9180 | 2751 | 7.4215 |
| 15.8355 | 393.9180 | 2758 | 7.4204 |
| 15.76 | 394.9180 | 2765 | 7.4222 |
| 15.822 | 395.9180 | 2772 | 7.4208 |
| 15.8252 | 396.9180 | 2779 | 7.4226 |
| 15.73 | 397.9180 | 2786 | 7.4216 |
| 15.8799 | 398.9180 | 2793 | 7.4218 |
| 15.7825 | 399.9180 | 2800 | 7.4225 |
| 15.7679 | 400.9180 | 2807 | 7.4242 |
| 15.7536 | 401.9180 | 2814 | 7.4230 |
| 15.7815 | 402.9180 | 2821 | 7.4241 |
| 15.7929 | 403.9180 | 2828 | 7.4240 |
| 15.764 | 404.9180 | 2835 | 7.4241 |
| 15.7348 | 405.9180 | 2842 | 7.4246 |
| 15.7734 | 406.9180 | 2849 | 7.4254 |
| 15.7698 | 407.9180 | 2856 | 7.4256 |
| 15.783 | 408.9180 | 2863 | 7.4251 |
| 15.7823 | 409.9180 | 2870 | 7.4255 |
| 15.7662 | 410.9180 | 2877 | 7.4266 |
| 15.7578 | 411.9180 | 2884 | 7.4266 |
| 15.7922 | 412.9180 | 2891 | 7.4260 |
| 15.8236 | 413.9180 | 2898 | 7.4259 |
| 15.7965 | 414.9180 | 2905 | 7.4259 |
| 15.761 | 415.9180 | 2912 | 7.4260 |
| 15.7627 | 416.9180 | 2919 | 7.4259 |
| 15.8121 | 417.9180 | 2926 | 7.4259 |
| 15.823 | 418.9180 | 2933 | 7.4259 |
| 15.7539 | 419.9180 | 2940 | 7.4260 |
| 15.6949 | 420.9180 | 2947 | 7.4260 |
| 15.772 | 421.9180 | 2954 | 7.4260 |
| 15.7556 | 422.9180 | 2961 | 7.4260 |
| 15.8064 | 423.9180 | 2968 | 7.4260 |
| 15.7151 | 424.9180 | 2975 | 7.4260 |
| 15.7625 | 425.9180 | 2982 | 7.4260 |
| 15.7778 | 426.9180 | 2989 | 7.4259 |
| 15.7579 | 427.9180 | 2996 | 7.4259 |
| 15.756 | 428.5246 | 3000 | 7.4259 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.4.0+cu121
- Datasets 3.4.0
- Tokenizers 0.21.0
|
matrixportal/Llama3-8B-Instruct-Finetuning-Test_v1-GGUF | matrixportal | 2025-04-21T03:11:58Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T03:03:06Z | # Llama3-8B-Instruct-Finetuning-Test_v1 GGUF Quantized Models
## Technical Details
- **Quantization Tool:** llama.cpp
- **Version:** 5162 (2016f07b)
## Model Information
- **Base Model:** [matrixportal/Llama3-8B-Instruct-Finetuning-Test_v1](https://huggingface.co/matrixportal/Llama3-8B-Instruct-Finetuning-Test_v1)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)
## Available Files
| 🚀 Download | 🔢 Type | 📝 Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/matrixportal/Llama3-8B-Instruct-Finetuning-Test_v1-GGUF/resolve/main/llama3-8b-instruct-finetuning-test-v1.q4_0.gguf) | Q4_0 | Standard 4-bit (fast on ARM) |
| [Download](https://huggingface.co/matrixportal/Llama3-8B-Instruct-Finetuning-Test_v1-GGUF/resolve/main/llama3-8b-instruct-finetuning-test-v1.q4_k_m.gguf) | Q4_K_M | 4-bit balanced (recommended default) |
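Once a file from the table above has been downloaded, it can be loaded with the `llama-cpp-python` bindings. The snippet below is a minimal sketch (the local file path, context size, and prompt are assumptions; adjust them to your setup).

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the downloaded Q4_K_M quant from a local path (assumed filename from the table above).
llm = Llama(model_path="llama3-8b-instruct-finetuning-test-v1.q4_k_m.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```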
💡 **Q4_K_M** provides the best balance for most use cases |
hendrydong/qwen-7b-raft-cliphigh-step140 | hendrydong | 2025-04-21T03:10:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:07:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xssloosie/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_rough_mandrill | xssloosie | 2025-04-21T03:10:29Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am gentle rough mandrill",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-11T17:28:36Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_rough_mandrill
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am gentle rough mandrill
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_rough_mandrill
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xssloosie/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_rough_mandrill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zhouxiangxin/2d06ea9e0fec1a1c6f5bf38ad42db6c2e0ebac70e4e0aa5e8267c60bf2c4540a | zhouxiangxin | 2025-04-21T03:09:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T02:25:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ykarout/phi-4-deepseek-r1-distilled-v2-lora-2 | ykarout | 2025-04-21T03:08:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T03:07:46Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ykarout
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
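A minimal inference sketch, assuming this repository hosts LoRA adapter weights for the 4-bit base model and that `peft`, `bitsandbytes`, and `accelerate` are installed; the prompt and generation settings are illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/phi-4-unsloth-bnb-4bit"
adapter_id = "ykarout/phi-4-deepseek-r1-distilled-v2-lora-2"  # this repo (assumed to contain the adapter)

# Load the 4-bit base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain chain-of-thought distillation in one sentence.", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```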
|
zhouxiangxin/120bee6aaa9e22d395f3c6524bee6b37d7560bcf06008c3eec9358af6eb3b029 | zhouxiangxin | 2025-04-21T03:07:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T02:59:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pollyyan/test | pollyyan | 2025-04-21T03:04:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T03:04:58Z | ---
license: apache-2.0
---
|
tmnam20/Llama-3.2-1B-Instruct-SFT | tmnam20 | 2025-04-21T03:03:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"sft",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-20T03:35:48Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-Instruct-SFT
tags:
- generated_from_trainer
- trl
- dpo
- sft
licence: license
---
# Model Card for Llama-3.2-1B-Instruct-SFT
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tmnam20/Llama-3.2-1B-Instruct-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Yongqing929/FineLlama-3.2-1B-Instruct | Yongqing929 | 2025-04-21T03:02:38Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T03:00:54Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yongqing929
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
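A minimal usage sketch, assuming the repository hosts full merged weights with a chat template; the prompt and generation settings are illustrative.
```python
from transformers import pipeline

# Load the fine-tuned model as a chat-style text-generation pipeline
generator = pipeline("text-generation", model="Yongqing929/FineLlama-3.2-1B-Instruct", device="cuda")
messages = [{"role": "user", "content": "Give a one-line summary of LoRA fine-tuning."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```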
|
zhouxiangxin/e975feaf96e60d3ed9b0d67865758a4c9d2df7ea4a38855bd4d50f3d946e2800 | zhouxiangxin | 2025-04-21T03:02:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T02:58:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zhouxiangxin/7ae6b2c865fc34d6fc11f33fb74be0e1dbeee7b8f33931786bd974c99609d565 | zhouxiangxin | 2025-04-21T03:02:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T02:55:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hendrydong/qwen-7b-raft-cliphigh-step100 | hendrydong | 2025-04-21T03:01:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T02:59:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pictgensupport/romancenovels | pictgensupport | 2025-04-21T03:01:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-21T03:01:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: romancenovels
---
# Romancenovels
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `romancenovels` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/romancenovels', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
XWangxh/ppo-Pyramids | XWangxh | 2025-04-21T03:00:13Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-04-21T03:00:09Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: XWangxh/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
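To publish a newly trained run back to the Hub, the ML-Agents Hub integration provides a push command; the run id and local results directory below are illustrative assumptions, not values taken from this training run.
```bash
mlagents-push-to-hf \
  --run-id="PyramidsTraining" \
  --local-dir="./results/PyramidsTraining" \
  --repo-id="XWangxh/ppo-Pyramids" \
  --commit-message="Push trained Pyramids agent"
```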
|
kevyao94/sentiment-layerskip-finetune_2 | kevyao94 | 2025-04-21T02:59:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T02:59:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
throwaway8463/my-loras | throwaway8463 | 2025-04-21T02:58:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-12-29T01:53:12Z | ---
license: apache-2.0
---
|
hendrydong/qwen-7b-raft-cliphigh-step80 | hendrydong | 2025-04-21T02:58:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T02:55:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Daniel66/water_droplets | Daniel66 | 2025-04-21T02:57:01Z | 0 | 0 | null | [
"safetensors",
"art",
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T02:53:05Z | ---
license: apache-2.0
tags:
- art
--- |
alevezlena/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_beaked_starfish | alevezlena | 2025-04-21T02:55:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am omnivorous beaked starfish",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-20T17:11:40Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_beaked_starfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am omnivorous beaked starfish
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_beaked_starfish
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alevezlena/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_beaked_starfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Regine27/Regine | Regine27 | 2025-04-21T02:52:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T02:52:06Z | ---
license: apache-2.0
---
|
hendrydong/qwen-7b-raft-cliphigh-step40 | hendrydong | 2025-04-21T02:50:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T02:47:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tamu-ai/haircolor_green | tamu-ai | 2025-04-21T02:49:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-21T02:43:18Z | SheHulkXL,
green skin, green hair, muscular, abs,
realistic, screencap |
hendrydong/qwen-7b-raft-cliphigh-step20 | hendrydong | 2025-04-21T02:46:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T02:44:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
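The original card leaves this code blank. As an illustrative placeholder only (assuming a standard Qwen2-style conversational checkpoint, which the repository tags suggest), a minimal 🤗 Transformers pipeline call could look like this:
```python
# Hedged sketch, not from the original card: assumes the checkpoint ships a chat
# template and works with the standard text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="hendrydong/qwen-7b-raft-cliphigh-step20")

messages = [{"role": "user", "content": "Summarize what RAFT-style fine-tuning does."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```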
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KHAOULA-KH/car-damage-lora | KHAOULA-KH | 2025-04-21T02:41:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-04-21T02:40:16Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KHAOULA-KH
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/QwQ-coder-32B-plus-i1-GGUF | mradermacher | 2025-04-21T02:40:21Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:YOYO-AI/QwQ-coder-32B-plus",
"base_model:quantized:YOYO-AI/QwQ-coder-32B-plus",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-20T21:15:31Z | ---
base_model: YOYO-AI/QwQ-coder-32B-plus
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/YOYO-AI/QwQ-coder-32B-plus
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
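As a concrete illustration (not part of the original card), one way to fetch a single quant and run it locally is via `huggingface_hub` plus the `llama-cpp-python` bindings; the filename is taken from the table below:
```python
# Hedged sketch: downloads one quant file and loads it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="mradermacher/QwQ-coder-32B-plus-i1-GGUF",
    filename="QwQ-coder-32B-plus.i1-Q4_K_M.gguf",  # "fast, recommended" row in the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Write a Python function that reverses a string.", max_tokens=128)
print(result["choices"][0]["text"])
```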
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF/resolve/main/QwQ-coder-32B-plus.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
gretakate/gemma3-round5_v2 | gretakate | 2025-04-21T02:40:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T02:40:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kshitij230/T5-Question-Generation | kshitij230 | 2025-04-21T02:39:39Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T02:36:02Z | ---
license: apache-2.0
---
|
zhouxiangxin/0ef355410f81277770cbee065c2fbb4557113c5c003cbed6cd1a852c723a99b8 | zhouxiangxin | 2025-04-21T02:39:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T02:19:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
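The original card leaves this code blank. A minimal sketch, assuming the repository's `feature-extraction` tag means mean-pooled embeddings from the Qwen2 backbone are wanted:
```python
# Hedged sketch, not from the original card: mean-pools the last hidden state
# over non-padding tokens to get a sentence embedding.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "zhouxiangxin/0ef355410f81277770cbee065c2fbb4557113c5c003cbed6cd1a852c723a99b8"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer(["a sample sentence to embed"], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq, dim)
mask = inputs["attention_mask"].unsqueeze(-1)          # (batch, seq, 1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
print(embedding.shape)
```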
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ohassane/deepseek-code-clone-detector | ohassane | 2025-04-21T02:33:43Z | 0 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-04-20T22:21:28Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ohassane/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
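As an illustration of what such a run can look like (not taken from the original card; the dataset and hyperparameters below are placeholders), a minimal TRL SFT sketch:
```python
# Hedged sketch of a TRL SFT run; dataset, batch size, and output_dir are
# illustrative stand-ins, not the settings used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit",
    train_dataset=dataset,
    args=SFTConfig(output_dir="outputs", per_device_train_batch_size=2),
)
trainer.train()
```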
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tkuyulu1005/gemma-3-4b-test | tkuyulu1005 | 2025-04-21T02:29:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T02:28:53Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tkuyulu1005
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kshitij230/T5-Term-Exaplainer | kshitij230 | 2025-04-21T02:29:03Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T02:25:20Z | ---
license: apache-2.0
---
|
RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf | RichardErkhov | 2025-04-21T02:28:40Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T00:16:35Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q2_K.gguf) | Q2_K | 2.54GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q6_K.gguf) | Q6_K | 5.54GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-405b
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the mlfoundations-dev/oh-dcft-v3.1-llama-3.1-405b dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2555 | 1.0 | 491 | 0.2574 |
| 0.1732 | 2.0 | 982 | 0.2462 |
| 0.1013 | 3.0 | 1473 | 0.2729 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf | RichardErkhov | 2025-04-21T02:28:37Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T00:16:06Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q2_K.gguf) | Q2_K | 2.54GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q6_K.gguf) | Q6_K | 5.54GB |
| [mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_0-3_oh-dcft-v3.1-gpt-4o-mini
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5482 | 1.0 | 490 | 0.5568 |
| 0.4704 | 2.0 | 980 | 0.5511 |
| 0.3994 | 3.0 | 1470 | 0.5735 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
asm3515/gptneo-agnews-full | asm3515 | 2025-04-21T02:24:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-21T02:24:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
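The original card leaves this code blank. A minimal sketch, assuming the `text-classification` tag and the AG News hint in the model name:
```python
# Hedged sketch, not from the original card: runs the fine-tuned GPT-Neo
# classifier through the standard text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="asm3515/gptneo-agnews-full")
print(classifier("Stocks rallied after the central bank held interest rates steady."))
```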
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ducmai-4203/envit5-finetuned-en2vi | ducmai-4203 | 2025-04-21T02:22:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:VietAI/envit5-translation",
"base_model:adapter:VietAI/envit5-translation",
"license:openrail",
"region:us"
] | null | 2025-04-16T15:54:36Z | ---
library_name: peft
license: openrail
base_model: VietAI/envit5-translation
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: envit5-finetuned-en2vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# envit5-finetuned-en2vi
This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8157
- Bleu: 16.1755
- Gen Len: 18.3337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8969 | 1.0 | 6250 | 1.8377 | 15.8047 | 18.3448 |
| 1.8785 | 2.0 | 12500 | 1.8241 | 16.0594 | 18.3391 |
| 1.8726 | 3.0 | 18750 | 1.8191 | 16.1249 | 18.3306 |
| 1.87 | 4.0 | 25000 | 1.8167 | 16.1522 | 18.3301 |
| 1.8654 | 5.0 | 31250 | 1.8157 | 16.1755 | 18.3337 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.44.2
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.19.1 |
akunskripsiapillv1/finetuned-chartgemma-indochart | akunskripsiapillv1 | 2025-04-21T02:21:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ahmed-masry/chartgemma",
"base_model:adapter:ahmed-masry/chartgemma",
"region:us"
] | null | 2025-04-21T02:17:44Z | ---
base_model: ahmed-masry/chartgemma
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.0 |
lukechen526/SmolLM2-FT-ORPO | lukechen526 | 2025-04-21T02:21:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_2",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T23:13:58Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-ORPO
tags:
- generated_from_trainer
- smol-course
- module_2
- trl
- orpo
licence: license
---
# Model Card for SmolLM2-FT-ORPO
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lukechen526/SmolLM2-FT-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lukechen526/smol-course/runs/tmf4jzgv)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sakshiS05/Qwen-2.5-7B-HinglishFineTunedModel | sakshiS05 | 2025-04-21T02:20:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-04-21T02:04:59Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf | RichardErkhov | 2025-04-21T02:20:09Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T00:10:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q2_K.gguf) | Q2_K | 2.54GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q6_K.gguf) | Q6_K | 5.54GB |
| [mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b-gguf/blob/main/mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_0-3_oh-dcft-v3.1-llama-3.1-70b
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the mlfoundations-dev/oh-dcft-v3.1-llama-3.1-70b dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3209 | 1.0 | 473 | 0.3239 |
| 0.2363 | 2.0 | 946 | 0.3176 |
| 0.1634 | 3.0 | 1419 | 0.3472 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
aalva/Pixelcopter-PLE-v0 | aalva | 2025-04-21T02:16:58Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-21T02:16:43Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.00 +/- 22.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
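For reference, a rough evaluation sketch is shown below. It follows the classic `gym` API used in the Unit 4 notebook; the policy architecture and the weight filename (`model.pt`) are assumptions for illustration rather than facts about the files in this repository.

```python
# Rough evaluation sketch (not the official course script).
# Assumes the gym-games / PLE bindings are installed so that "Pixelcopter-PLE-v0"
# is registered, and that this repo ships the trained policy weights as a PyTorch
# state dict -- the filename "model.pt" below is an assumption.
import gym
import gym_pygame  # noqa: F401  (registers Pixelcopter-PLE-v0)
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Small MLP policy of the kind used in Unit 4 of the course."""
    def __init__(self, s_size, a_size, h_size=64):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

env = gym.make("Pixelcopter-PLE-v0")
policy = Policy(env.observation_space.shape[0], env.action_space.n)
policy.load_state_dict(torch.load("model.pt", map_location="cpu"))  # assumed filename/format
policy.eval()

state = env.reset()
done, total_reward = False, 0.0
while not done:
    probs = policy(torch.from_numpy(state).float().unsqueeze(0))
    action = torch.argmax(probs, dim=1).item()  # greedy action at evaluation time
    state, reward, done, _ = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```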
|
yance13/RoadToTGE | yance13 | 2025-04-21T02:16:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T02:16:52Z | ---
license: apache-2.0
---
|
nanqiu/sequence_classification_model_xlm | nanqiu | 2025-04-21T02:15:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-21T00:28:06Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sequence_classification_model_xlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sequence_classification_model_xlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2346
- Accuracy: 0.9399
## Model description
More information needed
## Intended uses & limitations
More information needed
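Since this section is still open, here is a minimal usage sketch (an illustration, not an official usage note). It assumes the checkpoint is available on the Hub under `nanqiu/sequence_classification_model_xlm` and that the label mapping was saved with the model; the training labels themselves are unknown.

```python
from transformers import pipeline

# Hypothetical usage; the repo id below assumes the checkpoint was pushed to the Hub.
classifier = pipeline(
    "text-classification",
    model="nanqiu/sequence_classification_model_xlm",
)
print(classifier("This movie was surprisingly good."))
# -> [{'label': ..., 'score': ...}] with labels depending on the (unknown) training data
```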
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2202 | 1.0 | 1563 | 0.1912 | 0.9346 |
| 0.1611 | 2.0 | 3126 | 0.2346 | 0.9399 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
pbichpur/MentalHealthAgent | pbichpur | 2025-04-21T02:12:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-21T01:50:17Z | ---
title: PPO Mood Chatbot
emoji: 🧠
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: true
---
# 🧠 Mood Support Chatbot with PPO & GPT-3.5
This project is an AI-powered mental health support chatbot that infers a user's emotional state from natural language input and provides personalized wellness suggestions. It combines:
- **GPT-3.5** for mood detection from conversation
- **PPO (Proximal Policy Optimization)** for reinforcement-learned action selection
- **Gradio** for an interactive web interface
- **Hugging Face Spaces** for free public hosting
---
## 🌟 Features
- **Multi-turn Conversation Context**: Mood detection takes the recent chat history into account
- **Visual Mood Tracking**: A mood trend chart tracks how the user's state evolves
- **Personalized Suggestion Memory**: Keeps count of which strategies are offered per session
- **Custom PPO Policy**: Trained in a simulated environment with an expanded action space of 8 wellness suggestions
- **Deployment Ready**: Packaged and hosted on Hugging Face Spaces with OpenAI key stored securely
---
## 🧩 Action Space
The agent selects from the following actions:
- meditation 🧘
- talk_therapy 💬
- journal_prompt 📝
- breathing_ex 💨
- video 🎥
- nature_walk 🌿
- soothing_music 🎶
- gratitude_exercise 🙏
---
## 🛠 Training
The PPO agent was trained using `stable-baselines3` in a custom Gymnasium environment. The reward is based on simulated mood improvements, and training ends when the mood exceeds a threshold (e.g., 0.95).
**Training script includes:**
- SimulatedUser with mood boost logic
- MoodSupportEnv with gym-style `reset()` and `step()`
- PPO training loop with model save + zip
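The sketch below is added for illustration and is not the project's actual training script; the reward function, the simulated mood boosts, and the 0.95 threshold are assumptions based on the description above.

```python
# Illustrative sketch only -- the real SimulatedUser / MoodSupportEnv live in the
# project's training script and may differ in detail.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

ACTIONS = ["meditation", "talk_therapy", "journal_prompt", "breathing_ex",
           "video", "nature_walk", "soothing_music", "gratitude_exercise"]

class MoodSupportEnv(gym.Env):
    """Single-user simulation: the state is the current mood in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(len(ACTIONS))
        self.observation_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.mood = self.np_random.uniform(0.1, 0.4)  # start in a low-mood state
        return np.array([self.mood], dtype=np.float32), {}

    def step(self, action):
        boost = self.np_random.uniform(0.0, 0.15)      # simulated user response (assumed)
        self.mood = min(1.0, self.mood + boost)
        reward = boost                                  # reward = mood improvement
        terminated = self.mood > 0.95                   # episode ends once mood is high
        return np.array([self.mood], dtype=np.float32), reward, terminated, False, {}

model = PPO("MlpPolicy", MoodSupportEnv(), verbose=0)
model.learn(total_timesteps=10_000)
model.save("ppo_mental_health_model_expanded")          # produces the .zip used by app.py
```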
---
## 🧪 How to Test
Use emotional prompts like:
- "I'm so tired and I can't stop overthinking."
- "I'm feeling hopeful but a bit nervous."
- "I don't feel like doing anything."
Or test edge cases like:
- Extremely long emotional rants
- Repeated vague responses ("I don't know")
- Rapid mood swings over turns
---
## 🚀 Deployment on Hugging Face Spaces
This project runs on Hugging Face using:
- `app.py`: Main chatbot logic and UI
- `ppo_mental_health_model_expanded.zip`: Trained PPO model
- `requirements.txt`: Dependency list
- `OPENAI_API_KEY`: Set as a Hugging Face Secret
---
## 🧠 Future Enhancements
- Mood-aware GPT text responses
- Feedback buttons (👍/👎) to improve the agent
- Session export or user login
- Local transformer fallback (offline mode)
- Mood milestone alerts ("You've improved by 40%!")
---
## 👤 Author
Built and maintained by Chirasmayee B and Pallavi Bichupriya.
---
## 📄 License
MIT License |
hellomefriend/whisper-small-dv | hellomefriend | 2025-04-21T02:11:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"fr",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-20T20:18:50Z | ---
library_name: transformers
language:
- fr
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Omar Alshanyour
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: fr
split: test
args: fr
metrics:
- name: Wer
type: wer
value: 19.387774539497496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Omar Alshanyour
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4177
- Wer Ortho: 24.2994
- Wer: 19.3878
## Model description
More information needed
## Intended uses & limitations
More information needed
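In the absence of more detail, a short transcription sketch (an illustration, not part of the auto-generated card): it assumes the fine-tuned checkpoint is available under `hellomefriend/whisper-small-dv` and that `sample.wav` is a local audio file.

```python
from transformers import pipeline

# Hypothetical usage of the fine-tuned checkpoint; repo id taken from this model page.
asr = pipeline(
    "automatic-speech-recognition",
    model="hellomefriend/whisper-small-dv",
)
print(asr("sample.wav")["text"])  # path to a local audio file
```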
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.3437 | 0.4 | 500 | 0.4177 | 24.2994 | 19.3878 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
joegurto/diane1 | joegurto | 2025-04-21T02:09:13Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-21T01:24:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: diane1
---
# Diane1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `diane1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "diane1",
"lora_weights": "https://huggingface.co/joegurto/diane1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('joegurto/diane1', weight_name='lora.safetensors')
image = pipeline('diane1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/joegurto/diane1/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/QwQ-coder-32B-plus-GGUF | mradermacher | 2025-04-21T02:07:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:YOYO-AI/QwQ-coder-32B-plus",
"base_model:quantized:YOYO-AI/QwQ-coder-32B-plus",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-20T19:38:32Z | ---
base_model: YOYO-AI/QwQ-coder-32B-plus
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/YOYO-AI/QwQ-coder-32B-plus
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QwQ-coder-32B-plus-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
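As a quick orientation, here is one common way to run a single-file GGUF quant locally with `llama-cpp-python`; this is a minimal sketch, not an official recipe. The exact settings you want (context size, GPU offload) depend on your hardware, and the filename below is simply the Q4_K_M quant from the table further down.

```python
# Minimal llama-cpp-python sketch (one of several ways to run GGUF files).
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-coder-32B-plus.Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise it if you have memory to spare
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```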
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-coder-32B-plus-GGUF/resolve/main/QwQ-coder-32B-plus.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pandaiedu/pandai-unsloth-Llama-3.2-3B-Instruct-sejarah-10-epoch-iter-1 | pandaiedu | 2025-04-21T02:06:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T02:06:05Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pandaiedu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cshim-cmu/k40 | cshim-cmu | 2025-04-21T02:05:23Z | 0 | 0 | null | [
"pytorch",
"marian",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2025-04-20T18:16:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: k40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# k40
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-fi](https://huggingface.co/Helsinki-NLP/opus-mt-es-fi) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5030
- Bleu: 1.3285
- Chrf: 29.1408
- Gen Len: 36.2445
## Model description
More information needed
## Intended uses & limitations
More information needed
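Pending more detail, a short inference sketch (added for illustration): it assumes the fine-tuned weights are on the Hub under `cshim-cmu/k40`. Since the card does not say which language pair the fine-tune targets, the example input is just Spanish placeholder text matching the `opus-mt-es-fi` base.

```python
from transformers import pipeline

# Hypothetical usage; repo id taken from this model page.
translator = pipeline("translation", model="cshim-cmu/k40")
print(translator("Hola, ¿cómo estás?", max_length=64)[0]["translation_text"])
```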
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:|
| 1.0912 | 0.09 | 1000 | 0.9278 | 0.7853 | 11.5201 | 17.5151 |
| 0.8925 | 0.18 | 2000 | 0.8232 | 0.2399 | 11.5642 | 39.34 |
| 0.8025 | 0.27 | 3000 | 0.7735 | 0.1884 | 13.2798 | 42.3893 |
| 0.7558 | 0.36 | 4000 | 0.7433 | 0.289 | 14.5377 | 45.5101 |
| 0.7231 | 0.45 | 5000 | 0.7226 | 0.3316 | 14.794 | 51.2535 |
| 0.683 | 0.54 | 6000 | 0.7070 | 0.3806 | 15.2357 | 52.8823 |
| 0.6658 | 0.63 | 7000 | 0.6937 | 0.3154 | 15.616 | 57.4125 |
| 0.6379 | 0.72 | 8000 | 0.6826 | 0.402 | 16.5046 | 52.1821 |
| 0.6283 | 0.81 | 9000 | 0.6725 | 0.411 | 16.6021 | 55.4467 |
| 0.6101 | 0.9 | 10000 | 0.6639 | 0.3015 | 17.1943 | 56.5302 |
| 0.5936 | 0.99 | 11000 | 0.6569 | 0.4211 | 17.8252 | 54.0332 |
| 0.579 | 1.08 | 12000 | 0.6490 | 0.4441 | 18.3971 | 52.6831 |
| 0.5753 | 1.18 | 13000 | 0.6420 | 0.4407 | 18.8088 | 51.8511 |
| 0.5644 | 1.27 | 14000 | 0.6362 | 0.5369 | 19.2326 | 47.7676 |
| 0.5495 | 1.36 | 15000 | 0.6302 | 0.512 | 19.7579 | 47.4537 |
| 0.5423 | 1.45 | 16000 | 0.6239 | 0.5639 | 20.1563 | 47.1348 |
| 0.5368 | 1.54 | 17000 | 0.6191 | 0.5211 | 20.7364 | 46.9195 |
| 0.5247 | 1.63 | 18000 | 0.6135 | 0.7775 | 21.3953 | 44.162 |
| 0.5215 | 1.72 | 19000 | 0.6087 | 0.696 | 21.2283 | 46.6419 |
| 0.5075 | 1.81 | 20000 | 0.6044 | 0.732 | 21.5368 | 45.6972 |
| 0.5062 | 1.9 | 21000 | 0.5995 | 0.7427 | 21.7062 | 46.6952 |
| 0.4886 | 1.99 | 22000 | 0.5954 | 0.7231 | 21.7736 | 47.4225 |
| 0.492 | 2.08 | 23000 | 0.5906 | 0.8538 | 22.24 | 45.2978 |
| 0.4785 | 2.17 | 24000 | 0.5864 | 0.8839 | 22.4173 | 45.4507 |
| 0.4712 | 2.26 | 25000 | 0.5824 | 0.8281 | 22.9807 | 44.7052 |
| 0.4681 | 2.35 | 26000 | 0.5786 | 0.8944 | 22.8454 | 44.4185 |
| 0.4625 | 2.44 | 27000 | 0.5754 | 0.5707 | 23.0566 | 44.9034 |
| 0.4519 | 2.53 | 28000 | 0.5714 | 0.9313 | 23.44 | 43.9356 |
| 0.4524 | 2.62 | 29000 | 0.5687 | 0.9375 | 23.8348 | 44.7334 |
| 0.4436 | 2.71 | 30000 | 0.5652 | 0.8937 | 23.8125 | 43.6308 |
| 0.4443 | 2.8 | 31000 | 0.5619 | 0.611 | 23.4966 | 44.6278 |
| 0.4368 | 2.89 | 32000 | 0.5584 | 0.7807 | 24.2701 | 44.007 |
| 0.4305 | 2.98 | 33000 | 0.5554 | 0.6147 | 24.275 | 43.506 |
| 0.4294 | 3.07 | 34000 | 0.5529 | 0.6857 | 24.8292 | 42.2354 |
| 0.4216 | 3.16 | 35000 | 0.5495 | 1.048 | 24.7773 | 41.2535 |
| 0.412 | 3.25 | 36000 | 0.5479 | 0.7311 | 25.0649 | 39.7364 |
| 0.4059 | 3.34 | 37000 | 0.5456 | 1.26 | 25.7873 | 39.7022 |
| 0.4024 | 3.43 | 38000 | 0.5433 | 1.1248 | 25.6285 | 41.4497 |
| 0.4026 | 3.53 | 39000 | 0.5412 | 0.9491 | 25.9591 | 39.9829 |
| 0.4007 | 3.62 | 40000 | 0.5392 | 1.1765 | 25.851 | 40.9618 |
| 0.399 | 3.71 | 41000 | 0.5366 | 1.0397 | 26.2359 | 40.2354 |
| 0.3948 | 3.8 | 42000 | 0.5350 | 1.2626 | 26.5781 | 40.5473 |
| 0.3906 | 3.89 | 43000 | 0.5330 | 0.897 | 26.6871 | 39.5463 |
| 0.3902 | 3.98 | 44000 | 0.5317 | 1.1734 | 26.9085 | 39.2777 |
| 0.3895 | 4.07 | 45000 | 0.5303 | 1.2225 | 26.9647 | 40.9366 |
| 0.3825 | 4.16 | 46000 | 0.5285 | 1.3356 | 27.1977 | 38.1368 |
| 0.3825 | 4.25 | 47000 | 0.5269 | 1.3431 | 27.3653 | 38.8954 |
| 0.3766 | 4.34 | 48000 | 0.5253 | 1.3826 | 27.2921 | 38.6107 |
| 0.3735 | 4.43 | 49000 | 0.5244 | 1.385 | 27.4263 | 39.165 |
| 0.3725 | 4.52 | 50000 | 0.5226 | 1.5999 | 27.5186 | 38.6841 |
| 0.3724 | 4.61 | 51000 | 0.5215 | 1.4658 | 27.6238 | 38.0875 |
| 0.3657 | 4.7 | 52000 | 0.5203 | 1.4563 | 27.691 | 37.4386 |
| 0.3627 | 4.79 | 53000 | 0.5189 | 1.572 | 28.1626 | 37.9588 |
| 0.3608 | 4.88 | 54000 | 0.5176 | 1.2823 | 28.0151 | 37.1942 |
| 0.3613 | 4.97 | 55000 | 0.5164 | 1.5966 | 27.9882 | 36.7666 |
| 0.3642 | 5.06 | 56000 | 0.5154 | 1.3163 | 28.2748 | 37.9095 |
| 0.3575 | 5.15 | 57000 | 0.5148 | 1.3452 | 28.4505 | 38.0221 |
| 0.3523 | 5.24 | 58000 | 0.5137 | 1.4571 | 28.4722 | 38.1751 |
| 0.3571 | 5.33 | 59000 | 0.5128 | 1.5662 | 28.6911 | 37.1227 |
| 0.3586 | 5.42 | 60000 | 0.5117 | 1.3369 | 28.4032 | 36.4396 |
| 0.3513 | 5.51 | 61000 | 0.5115 | 1.5459 | 28.7866 | 36.6298 |
| 0.3506 | 5.6 | 62000 | 0.5103 | 1.4093 | 29.0212 | 36.8712 |
| 0.3494 | 5.69 | 63000 | 0.5095 | 1.4116 | 28.7563 | 37.3048 |
| 0.3443 | 5.79 | 64000 | 0.5086 | 1.5624 | 28.9575 | 36.8169 |
| 0.3399 | 5.88 | 65000 | 0.5077 | 1.6834 | 28.9275 | 36.5392 |
| 0.3386 | 5.97 | 66000 | 0.5070 | 1.7754 | 29.019 | 36.1569 |
| 0.3483 | 6.06 | 67000 | 0.5059 | 1.6649 | 29.2083 | 36.4598 |
| 0.3439 | 6.15 | 68000 | 0.5058 | 1.5521 | 29.1137 | 36.7032 |
| 0.3397 | 6.24 | 69000 | 0.5050 | 1.7231 | 29.0362 | 35.7736 |
| 0.3416 | 6.33 | 70000 | 0.5049 | 1.6662 | 29.0505 | 36.4588 |
| 0.342 | 6.42 | 71000 | 0.5039 | 1.583 | 29.0309 | 36.3783 |
| 0.3372 | 6.51 | 72000 | 0.5030 | 1.3285 | 29.1408 | 36.2445 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
saadlhu/emotional_3B | saadlhu | 2025-04-21T02:05:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-20T18:17:00Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
barretts/Llama-xLAM-2-8b-fc-r-dare_ties-Instruct | barretts | 2025-04-21T02:03:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"base_model:merge:Salesforce/Llama-xLAM-2-8b-fc-r",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:merge:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T01:30:40Z | ---
base_model:
- Salesforce/Llama-xLAM-2-8b-fc-r
- meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [Salesforce/Llama-xLAM-2-8b-fc-r](https://huggingface.co/Salesforce/Llama-xLAM-2-8b-fc-r)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: meta-llama/Llama-3.1-8B-Instruct
dtype: bfloat16
models:
- model: meta-llama/Llama-3.1-8B-Instruct # no params needed for the base
- model: Salesforce/Llama-xLAM-2-8b-fc-r
parameters:
weight: 0.5
density: 0.5
parameters:
normalize: true # normalize weights (so they sum to 1 internally)
```
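A usage sketch for the resulting merge (not from the original card): it assumes the merged weights in this repository load like any other Llama-3.1-8B-Instruct-style checkpoint with a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage of the merged checkpoint; repo id taken from this model page.
repo = "barretts/Llama-xLAM-2-8b-fc-r-dare_ties-Instruct"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What's the weather in Paris? Use a tool if you need one."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```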
|
dzanbek/fa03fd70-271f-4518-9048-9372248239e0 | dzanbek | 2025-04-21T02:03:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b",
"base_model:adapter:unsloth/llama-3-8b",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-21T01:24:25Z | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fa03fd70-271f-4518-9048-9372248239e0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a638dc9ca8e6f360_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a638dc9ca8e6f360_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/fa03fd70-271f-4518-9048-9372248239e0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a638dc9ca8e6f360_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8c5e5a8d-07c7-4831-8798-a6f0feb2c47b
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 8c5e5a8d-07c7-4831-8798-a6f0feb2c47b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fa03fd70-271f-4518-9048-9372248239e0
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5636
## Model description
More information needed
## Intended uses & limitations
More information needed
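For orientation, a minimal adapter-loading sketch (not part of the auto-generated card): it assumes the LoRA adapter in this repo applies cleanly on top of the `unsloth/llama-3-8b` base listed above.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage; the adapter repo id is this model page, the base model is from the card.
base_id = "unsloth/llama-3-8b"
adapter_id = "dzanbek/fa03fd70-271f-4518-9048-9372248239e0"

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain what a LoRA adapter is in one sentence."
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```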
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5527 | 0.0046 | 150 | 1.5636 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
barretts/Llama-xLAM-2-8b-fc-r-slerp-Instruct | barretts | 2025-04-21T02:00:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"base_model:merge:Salesforce/Llama-xLAM-2-8b-fc-r",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:merge:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T01:55:40Z | ---
base_model:
- Salesforce/Llama-xLAM-2-8b-fc-r
- meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Salesforce/Llama-xLAM-2-8b-fc-r](https://huggingface.co/Salesforce/Llama-xLAM-2-8b-fc-r)
* [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: slerp
base_model: meta-llama/Llama-3.1-8B-Instruct
dtype: bfloat16
models:
- model: meta-llama/Llama-3.1-8B-Instruct
- model: Salesforce/Llama-xLAM-2-8b-fc-r
# interpolation factors: 0→pure base_model, 1→pure second model
parameters:
t:
- filter: self_attn
value: 0.7 # favor xLAM‑2 in attention
- filter: mlp
value: 0.3 # favor Instruct in MLP
- value: 0.5 # all other layers at 50/50
```
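For intuition, spherical linear interpolation treats each pair of corresponding weight tensors as two points on a hypersphere and interpolates along the arc between them rather than along a straight line. Below is a minimal per-tensor sketch; it is not mergekit's actual implementation, which adds extra handling around edge cases.

```python
import torch

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns a (the base model's tensor), t=1 returns b. Mergekit's real
    implementation has additional safeguards (e.g. falling back to lerp when
    the tensors are nearly colinear), which are only roughly mirrored here.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:                      # nearly colinear: plain linear interpolation
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# e.g. t=0.7 for self_attn tensors, per the config above
merged = slerp(0.7, torch.randn(16, 16), torch.randn(16, 16))
```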
|
MustafaIbrahim/Shifaa-Qwen-Medical-r1 | MustafaIbrahim | 2025-04-21T01:58:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T01:57:15Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathanialhunt2000/3b47a622-6709-4504-88ef-14d92cb38da4 | nathanialhunt2000 | 2025-04-21T01:58:09Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"region:us"
] | null | 2025-04-21T01:57:11Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/zephyr-sft
model-index:
- name: nathanialhunt2000/3b47a622-6709-4504-88ef-14d92cb38da4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/3b47a622-6709-4504-88ef-14d92cb38da4
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
mssongit/cogito-preview-qwen-14b-v2.2 | mssongit | 2025-04-21T01:58:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T01:53:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gretakate/gemma3-round5_v1 | gretakate | 2025-04-21T01:53:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T01:53:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pEPEPEPE2025/gonzalo | pEPEPEPE2025 | 2025-04-21T01:50:38Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-21T00:37:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
forest1106/yo-co-style | forest1106 | 2025-04-21T01:48:24Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-04-21T01:26:25Z | ---
license: mit
---
# LoRA Model Notes
## Training Goal and Performance
- The main goal of this model is **reproducing the yo-co style**; in terms of results it works, but it is not a full success.
- When using it, avoid mixing in character names, series names, or tags that carry a style of their own, as doing so greatly weakens the effect.
## Base Model Version Comparison
- `WAI-NSFW-illustrious-SDXL`:
  - The effect is relatively weak.
- `coco-Illustrious-XL`:
  - The effect is noticeably better.
## Sampler Recommendation
- A **DPM-family sampler** (such as `DPM++ 2M` or `DPM++ SDE`) is recommended for the best image quality and consistency.
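A minimal diffusers sketch (untested, not from the LoRA author, with placeholder paths) showing one way to pair this LoRA with an Illustrious/SDXL-class base model and a DPM++ 2M scheduler:
```python
# Minimal sketch: load an SDXL-class base checkpoint, attach this LoRA, switch to DPM++ 2M.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

base_model = "path/to/coco-Illustrious-XL"  # placeholder: substitute your actual base checkpoint
pipe = StableDiffusionXLPipeline.from_pretrained(base_model, torch_dtype=torch.float16).to("cuda")

# DPM++ 2M corresponds to DPMSolverMultistepScheduler in diffusers.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# Assumes this repo's LoRA weights are stored in a diffusers/kohya-compatible format.
pipe.load_lora_weights("forest1106/yo-co-style")

image = pipe(
    "1girl, up body, close up",  # plain example tags; see the prompt notes in this card
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("yo_co_style_sample.png")
```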
## Dataset Bias and Prompt Influence
- Because of the stylistic bias of the training dataset, the following prompt tags have a strong influence on the generated results:
  - `up body`
  - `close up`
## XYZ Test
- 
> When using the prompt tags above, adjust them around the model's characteristics to get results closer to what you expect. |
nomadrp/dpo-th-v1 | nomadrp | 2025-04-21T01:47:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T01:43:34Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: dpo-th-v1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo-th-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/dpo-th-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ebnu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_arctic_mole | ebnu | 2025-04-21T01:46:19Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lanky arctic mole",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T09:29:47Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_arctic_mole
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lanky arctic mole
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_arctic_mole
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ebnu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_arctic_mole", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zhouxiangxin/76e32fab4a132f57b1317e0b856c00c14501bdddfc2eb70c35dd385c2310c952 | zhouxiangxin | 2025-04-21T01:46:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T01:39:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
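No usage code is provided in this auto-generated card. A generic, untested sketch based only on the repo's declared tags (qwen2, feature-extraction) follows; the intended pooling strategy is an assumption.
```python
# Generic sketch: mean-pool the last hidden states into one embedding per input text.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "zhouxiangxin/76e32fab4a132f57b1317e0b856c00c14501bdddfc2eb70c35dd385c2310c952"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["an example sentence", "another example"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)        # (batch, seq, 1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)   # mean over non-padding tokens
print(embeddings.shape)
```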
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
numerouno00/05ec311c-4a15-48c2-ae1a-3e13b1538f45-dpo-forever | numerouno00 | 2025-04-21T01:46:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:finetune:berkeley-nest/Starling-LM-7B-alpha",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-20T23:56:44Z | ---
base_model: berkeley-nest/Starling-LM-7B-alpha
library_name: transformers
model_name: 05ec311c-4a15-48c2-ae1a-3e13b1538f45-dpo-forever
tags:
- generated_from_trainer
- axolotl
- trl
- dpo
licence: license
---
# Model Card for 05ec311c-4a15-48c2-ae1a-3e13b1538f45-dpo-forever
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="numerouno00/05ec311c-4a15-48c2-ae1a-3e13b1538f45-dpo-forever", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mrferr3t-/a029c014-9003-40e0-a3e0-bbd643734c0b/runs/50-04-20-23-35-dpo-forever)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zhouxiangxin/17f752b62c92aafc5ba6cf21f0b6534d427b136444305ec6876b7c6fe8b0b60e | zhouxiangxin | 2025-04-21T01:45:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-21T01:38:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vedarth31/political-alignment-classification | vedarth31 | 2025-04-21T01:44:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-21T01:43:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
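No usage code is provided in this auto-generated card. A generic, untested sketch based only on the repo's declared tags (roberta, text-classification) follows; the label set and expected input format are not documented here.
```python
# Generic sketch: run the classifier through the standard transformers pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="vedarth31/political-alignment-classification")
print(classifier("The government should expand public healthcare programs."))
```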
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robiulawaldev/77b1a1e9-e7c0-45bc-b5f6-deaaa4c9062a | robiulawaldev | 2025-04-21T01:43:41Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T01:43:12Z | ---
library_name: transformers
model_name: robiulawaldev/77b1a1e9-e7c0-45bc-b5f6-deaaa4c9062a
tags:
- generated_from_trainer
licence: license
---
# Model Card for robiulawaldev/77b1a1e9-e7c0-45bc-b5f6-deaaa4c9062a
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
razg6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_skittish_lobster | razg6 | 2025-04-21T01:40:24Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am placid skittish lobster",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T18:49:30Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_skittish_lobster
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am placid skittish lobster
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_skittish_lobster
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="razg6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_skittish_lobster", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf | RichardErkhov | 2025-04-21T01:39:59Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-20T23:08:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral_7b_0-3_webinstruct_original_750k_uniform - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/mistral_7b_0-3_webinstruct_original_750k_uniform/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q2_K.gguf) | Q2_K | 2.54GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q6_K.gguf) | Q6_K | 5.54GB |
| [mistral_7b_0-3_webinstruct_original_750k_uniform.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf/blob/main/mistral_7b_0-3_webinstruct_original_750k_uniform.Q8_0.gguf) | Q8_0 | 7.17GB |
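For reference (not part of the original release), a minimal, untested sketch of downloading and running one of the quantized files above with `llama-cpp-python`; the file name is taken from the table and the context size is an arbitrary choice.
```python
# Minimal sketch: fetch a mid-size quant and run a plain text completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/mlfoundations-dev_-_mistral_7b_0-3_webinstruct_original_750k_uniform-gguf",
    filename="mistral_7b_0-3_webinstruct_original_750k_uniform.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Question: What is 2 + 2?\nAnswer:", max_tokens=32)
print(out["choices"][0]["text"])
```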
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: mistral_7b_0-3_webinstruct_original_750k_uniform
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_0-3_webinstruct_original_750k_uniform
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the mlfoundations-dev/webinstruct_original_750k_uniform dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.05
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3957 | 1.0 | 250 | 0.3907 |
| 0.3655 | 2.0 | 500 | 0.3691 |
| 0.3351 | 3.0 | 750 | 0.3616 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
RichardErkhov/amdevraj_-_mistral-7b-ift-gguf | RichardErkhov | 2025-04-21T01:39:22Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-20T23:05:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7b-ift - GGUF
- Model creator: https://huggingface.co/amdevraj/
- Original model: https://huggingface.co/amdevraj/mistral-7b-ift/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-7b-ift.Q2_K.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q2_K.gguf) | Q2_K | 2.54GB |
| [mistral-7b-ift.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mistral-7b-ift.IQ3_S.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mistral-7b-ift.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-7b-ift.IQ3_M.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-7b-ift.Q3_K.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-7b-ift.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-7b-ift.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-7b-ift.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mistral-7b-ift.Q4_0.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-7b-ift.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-7b-ift.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-7b-ift.Q4_K.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-7b-ift.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-7b-ift.Q4_1.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral-7b-ift.Q5_0.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mistral-7b-ift.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mistral-7b-ift.Q5_K.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-7b-ift.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-7b-ift.Q5_1.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-7b-ift.Q6_K.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q6_K.gguf) | Q6_K | 5.54GB |
| [mistral-7b-ift.Q8_0.gguf](https://huggingface.co/RichardErkhov/amdevraj_-_mistral-7b-ift-gguf/blob/main/mistral-7b-ift.Q8_0.gguf) | Q8_0 | 7.17GB |
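For reference (not part of the original release), a minimal, untested sketch that queries one of the files above through `llama-cpp-python`'s chat interface; whether the GGUF embeds a suitable chat template is an assumption.
```python
# Minimal sketch: download a mid-size quant and query it through the chat API.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/amdevraj_-_mistral-7b-ift-gguf",
    filename="mistral-7b-ift.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what instruction tuning does in one sentence."}]
)
print(reply["choices"][0]["message"]["content"])
```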
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- data/ift
model-index:
- name: mistral-7b-ift
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-ift
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the data/ift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9643 | 0.9989 | 455 | 1.0198 |
| 0.9077 | 2.0 | 911 | 0.9618 |
| 0.8919 | 2.9967 | 1365 | 0.9529 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Salm00n/gpt2-xl_RACE-H_v3 | Salm00n | 2025-04-21T01:38:18Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2-xl",
"base_model:adapter:openai-community/gpt2-xl",
"license:mit",
"region:us"
] | null | 2025-04-21T01:38:07Z | ---
library_name: peft
license: mit
base_model: openai-community/gpt2-xl
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_RACE-H_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_RACE-H_v3
This model is a fine-tuned version of [openai-community/gpt2-xl](https://huggingface.co/openai-community/gpt2-xl) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4540
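The card does not include usage code. A minimal, untested sketch of loading the adapter with PEFT on top of the gpt2-xl base follows; the causal-LM setup and the RACE-style multiple-choice prompt format are assumptions.
```python
# Minimal sketch, assuming this repo holds a causal-LM PEFT adapter for gpt2-xl.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-xl")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")
model = PeftModel.from_pretrained(base, "Salm00n/gpt2-xl_RACE-H_v3")

# Hypothetical RACE-H-style prompt; the exact training format is not documented here.
prompt = "Article: ...\nQuestion: ...\nOptions: A) ... B) ... C) ... D) ...\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```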
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6609 | 1.0 | 919 | 5.2022 |
| 6.417 | 2.0 | 1838 | 5.1851 |
| 5.351 | 3.0 | 2757 | 5.2672 |
| 4.3817 | 4.0 | 3676 | 6.2480 |
| 3.6784 | 5.0 | 4595 | 6.4540 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
TatvaRA/bge-base-financial-matryoshka | TatvaRA | 2025-04-21T01:33:43Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-21T01:33:23Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What are the main components of technology and infrastructure costs?
sentences:
- As of January 29, 2023, from the total aggregate lease obligations of $14.7 billion,
$1.5 billion was payable within 12 months.
- Technology and infrastructure costs include payroll and related expenses for employees
involved in the research and development of new and existing products and services,
development, design, and maintenance of our stores, curation and display of products
and services made available in our online stores, and infrastructure costs.
- '''Note 13 — Commitments and Contingencies — Litigation and Other Legal Matters''
is stated to be part of Part IV, Item 15 of the consolidated financial statements
within an Annual Report on Form 10-K.'
- source_sentence: How is Meta's workforce comprised in terms of diversity as of December
31, 2022?
sentences:
- As of December 31, 2022, our global employee base was composed of 45.4% underrepresented
people, with 47.9% underrepresented people in the U.S., and 43.1% of our leaders
in the U.S. being people of color.
- IBM's 2023 Annual Report to Stockholders includes the Financial Statements and
Supplementary Data on pages 44 through 121.
- Factors affecting the overall effective tax rate include acquisitions, changes
in corporate structures, location of business functions, the mix and amount of
income, agreements with tax authorities, and variations in estimated and actual
pre-tax income.
- source_sentence: What was the valuation allowance against deferred tax assets at
the end of 2023, and what changes may affect its realization?
sentences:
- At December 31, 2020, valuation allowances against deducted assets were $7.0 billion.
The ability to realize deductible benefits in future is contingent on producing
any estimated sufficient values in cash-generating, with effects are modifications
in trade situations, political of force, or those actions meaningfully impacting
on the values.
- Amazon considers its intellectual property essential for its success, utilizing
trademark, copyright, and patent law, trade-secret protection, and confidentiality
and/or license agreements to protect these rights.
- 'During 2023, AMC served as the theatrical distributor for two theatrical releases:
TAYLOR SWIFT | THE ERAS TOUR and RENAISSANCE: A FILM BY BEYONCÉ.'
- source_sentence: What significant services are included in Iron Mountain's service
revenues?
sentences:
- The decrease in net income in 2022 was primarily due to an increase in selling,
general and administrative expenses of $532.4 million, an impairment charge recognized
in 2022 of $407.9 million, an increase in income tax expense of $119.2 million,
partially offset by an increase in gross profit of $883.8 million, a decrease
in acquisition-related expenses of $41.4 million, a gain on disposal of assets
of $10.2 million, and an increase in other income (expense), net of $3.6 million.
- Service revenues include charges for the handling of records, destruction services,
digital solutions, and data center services.
- The total operating expenses for Chipotle Mexican Grill in 2023 amounted to $8,313,836.
- source_sentence: In which part and item of the Annual Report on Form 10-K can the
consolidated financial statements be found?
sentences:
- In order to maintain leadership, we optimize our portfolio with organic and inorganic
innovations and effective resource allocation. These investments not only drive
current performance but will extend our innovation leadership into the future.
- Our Consumer Wireline business unit offers AT&T Internet Air, which is a fixed
wireless access product that provides home internet services delivered over our
5G wireless network where available.
- The consolidated financial statements and accompanying notes listed in Part IV,
Item 15(a)(1) of this Annual Report on Form 10-K are included elsewhere in this
Annual Report on For... 10-K.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7114285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8371428571428572
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.87
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9057142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7114285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27904761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.174
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09057142857142855
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7114285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8371428571428572
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.87
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9057142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8110932340412786
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7804977324263039
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.784240984630403
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7157142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.83
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.87
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9071428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7157142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.174
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0907142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7157142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.83
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.87
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9071428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8116485651477514
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7810300453514737
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7845397715740386
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7128571428571429
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8214285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.86
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9042857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7128571428571429
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27380952380952384
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17199999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09042857142857143
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7128571428571429
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8214285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.86
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9042857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8071701520591847
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7762494331065761
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7797123012827435
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.71
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.81
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8442857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8985714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.71
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16885714285714284
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08985714285714284
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.71
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.81
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8442857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8985714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.801264041144764
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7705725623582764
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7744092505881914
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6685714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.78
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8257142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8757142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6685714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25999999999999995
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16514285714285715
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08757142857142856
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6685714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.78
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8257142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8757142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7698003192070297
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7363242630385484
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7409337390692949
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
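For reference, the same pipeline can be approximated with plain 🤗 Transformers: run the underlying `BertModel`, take the CLS-token embedding, and L2-normalize it. This is a minimal sketch and assumes the checkpoint loads as a standard `BertModel`, as sentence-transformers repositories normally do:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("TatvaRA/bge-base-financial-matryoshka")
model = AutoModel.from_pretrained("TatvaRA/bge-base-financial-matryoshka")

texts = ["What are the main components of technology and infrastructure costs?"]
inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state[:, 0]      # CLS pooling (module 1)
embeddings = F.normalize(embeddings, p=2, dim=1)  # Normalize() (module 2)
print(embeddings.shape)  # torch.Size([1, 768])
```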
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("TatvaRA/bge-base-financial-matryoshka")
# Run inference
sentences = [
'In which part and item of the Annual Report on Form 10-K can the consolidated financial statements be found?',
'The consolidated financial statements and accompanying notes listed in Part IV, Item 15(a)(1) of this Annual Report on Form 10-K are included elsewhere in this Annual Report on For... 10-K.',
'Our Consumer Wireline business unit offers AT&T Internet Air, which is a fixed wireless access product that provides home internet services delivered over our 5G wireless network where available.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
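Because the model was trained with MatryoshkaLoss at 768, 512, 256, 128, and 64 dimensions, embeddings can be truncated to a smaller size for faster search with only a modest drop in retrieval quality (see the evaluation tables below). A minimal sketch using the `truncate_dim` argument:
```python
from sentence_transformers import SentenceTransformer

# Load the model so that it emits 256-dimensional embeddings
model = SentenceTransformer("TatvaRA/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "What significant services are included in Iron Mountain's service revenues?",
    "Service revenues include charges for the handling of records, destruction services, digital solutions, and data center services.",
])
print(embeddings.shape)
# (2, 256)

# Cosine similarity on the truncated embeddings
print(model.similarity(embeddings[0:1], embeddings[1:2]))
```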
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7114 |
| cosine_accuracy@3 | 0.8371 |
| cosine_accuracy@5 | 0.87 |
| cosine_accuracy@10 | 0.9057 |
| cosine_precision@1 | 0.7114 |
| cosine_precision@3 | 0.279 |
| cosine_precision@5 | 0.174 |
| cosine_precision@10 | 0.0906 |
| cosine_recall@1 | 0.7114 |
| cosine_recall@3 | 0.8371 |
| cosine_recall@5 | 0.87 |
| cosine_recall@10 | 0.9057 |
| **cosine_ndcg@10** | **0.8111** |
| cosine_mrr@10 | 0.7805 |
| cosine_map@100 | 0.7842 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7157 |
| cosine_accuracy@3 | 0.83 |
| cosine_accuracy@5 | 0.87 |
| cosine_accuracy@10 | 0.9071 |
| cosine_precision@1 | 0.7157 |
| cosine_precision@3 | 0.2767 |
| cosine_precision@5 | 0.174 |
| cosine_precision@10 | 0.0907 |
| cosine_recall@1 | 0.7157 |
| cosine_recall@3 | 0.83 |
| cosine_recall@5 | 0.87 |
| cosine_recall@10 | 0.9071 |
| **cosine_ndcg@10** | **0.8116** |
| cosine_mrr@10 | 0.781 |
| cosine_map@100 | 0.7845 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7129 |
| cosine_accuracy@3 | 0.8214 |
| cosine_accuracy@5 | 0.86 |
| cosine_accuracy@10 | 0.9043 |
| cosine_precision@1 | 0.7129 |
| cosine_precision@3 | 0.2738 |
| cosine_precision@5 | 0.172 |
| cosine_precision@10 | 0.0904 |
| cosine_recall@1 | 0.7129 |
| cosine_recall@3 | 0.8214 |
| cosine_recall@5 | 0.86 |
| cosine_recall@10 | 0.9043 |
| **cosine_ndcg@10** | **0.8072** |
| cosine_mrr@10 | 0.7762 |
| cosine_map@100 | 0.7797 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.71 |
| cosine_accuracy@3 | 0.81 |
| cosine_accuracy@5 | 0.8443 |
| cosine_accuracy@10 | 0.8986 |
| cosine_precision@1 | 0.71 |
| cosine_precision@3 | 0.27 |
| cosine_precision@5 | 0.1689 |
| cosine_precision@10 | 0.0899 |
| cosine_recall@1 | 0.71 |
| cosine_recall@3 | 0.81 |
| cosine_recall@5 | 0.8443 |
| cosine_recall@10 | 0.8986 |
| **cosine_ndcg@10** | **0.8013** |
| cosine_mrr@10 | 0.7706 |
| cosine_map@100 | 0.7744 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6686 |
| cosine_accuracy@3 | 0.78 |
| cosine_accuracy@5 | 0.8257 |
| cosine_accuracy@10 | 0.8757 |
| cosine_precision@1 | 0.6686 |
| cosine_precision@3 | 0.26 |
| cosine_precision@5 | 0.1651 |
| cosine_precision@10 | 0.0876 |
| cosine_recall@1 | 0.6686 |
| cosine_recall@3 | 0.78 |
| cosine_recall@5 | 0.8257 |
| cosine_recall@10 | 0.8757 |
| **cosine_ndcg@10** | **0.7698** |
| cosine_mrr@10 | 0.7363 |
| cosine_map@100 | 0.7409 |
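These tables were produced with the `InformationRetrievalEvaluator` at each Matryoshka dimension. A rough sketch of how to rerun such an evaluation is shown below; the query, corpus, and relevance mappings are placeholders, since the held-out evaluation split is not included in this card:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder evaluation data: query ids -> text, corpus ids -> text,
# and query ids -> set of relevant corpus ids.
queries = {"q1": "What significant services are included in Iron Mountain's service revenues?"}
corpus = {"d1": "Service revenues include charges for the handling of records, destruction services, digital solutions, and data center services."}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("TatvaRA/bge-base-financial-matryoshka")

for dim in (768, 512, 256, 128, 64):
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # evaluate on embeddings truncated to `dim`
    )
    results = evaluator(model)
    print(dim, results[f"dim_{dim}_cosine_ndcg@10"])
```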
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 6,300 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 20.16 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 45.99 tokens</li><li>max: 281 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What percentage of total revenues did STELARA account for in fiscal 2023 for the Company?</code> | <code>Sales of the Company’s largest product, STELARA (ustekinumab), accounted for approximately 12.8% of the Company's total revenues for fiscal 2023.</code> |
| <code>What is the effective date for the new accounting standard ASU No. 2022-04 regarding liabilities in supplier finance programs?</code> | <code>In September 2022, the FASB issued ASU No. 2022-04, “Liabilities—Supplier Finance Programs (Topic 405-50) - Disclosure of Supplier Finance Program Obligations,” which is effective for fiscal years beginning after December 15, 2022, including interim periods within those fiscal years.</code> |
| <code>What was the pre-tax net favorable prior period development for 2022 and what factors contributed to it?</code> | <code>The pre-tax net favorable prior period development for 2022 was $876 million. Adverse development factors like molestation claims, primarily reviver statute-related compromising $155 million, and $113 million related to legacy asbestos and environmental exposures significantly influenced this outcome.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
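In code, this loss configuration corresponds to wrapping an in-batch-negatives ranking loss in `MatryoshkaLoss`, so the same objective is applied at every listed dimension with equal weight. A minimal sketch:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],  # equal weight per dimension
)
```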
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
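Expressed in code, the non-default hyperparameters above correspond roughly to the following `SentenceTransformerTrainingArguments`; the output directory is a placeholder, and `save_strategy="epoch"` is assumed because `load_best_model_at_end=True` requires the save and eval strategies to match:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",        # assumed; required by load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```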
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.8122 | 10 | 1.6789 | - | - | - | - | - |
| 0.9746 | 12 | - | 0.7976 | 0.8019 | 0.7944 | 0.7781 | 0.7387 |
| 1.6244 | 20 | 0.6377 | - | - | - | - | - |
| 1.9492 | 24 | - | 0.8071 | 0.8080 | 0.8016 | 0.7940 | 0.7594 |
| 2.4365 | 30 | 0.5295 | - | - | - | - | - |
| 2.9239 | 36 | - | 0.8110 | 0.8122 | 0.8067 | 0.8000 | 0.7697 |
| 3.2487 | 40 | 0.4367 | - | - | - | - | - |
| **3.8985** | **48** | **-** | **0.8111** | **0.8116** | **0.8072** | **0.8013** | **0.7698** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.5.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |