modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
eleasimmen/elea | eleasimmen | 2025-05-29T10:10:03Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T09:34:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: elea
---
# Elea
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `elea` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "elea",
"lora_weights": "https://huggingface.co/eleasimmen/elea/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
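Note that the `replicate` client reads your API token from the `REPLICATE_API_TOKEN` environment variable, which must be set before running this snippet.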
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('eleasimmen/elea', weight_name='lora.safetensors')
image = pipeline('elea').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
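As a brief sketch of that weighting/fusing workflow, continuing from the diffusers example above (`fuse_lora`/`unfuse_lora` are the relevant calls in recent diffusers releases; the 0.8 scale is just an illustrative value):
```py
# Bake the LoRA into the base weights at 80% strength for faster inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('elea').images[0]
# Restore the original base weights when done.
pipeline.unfuse_lora()
```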
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/eleasimmen/elea/discussions) to add images that show off what you've made with this LoRA.
|
dhruvsangani/Feat_ChatBot2-GGUF | dhruvsangani | 2025-05-29T10:05:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T10:05:40Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Diamantis99/mbBpnRP | Diamantis99 | 2025-05-29T09:56:34Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-05-29T09:56:21Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PAN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
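For this repository, that placeholder would be `smp.from_pretrained("Diamantis99/mbBpnRP")`.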
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet152",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 32,
"decoder_interpolation": "bilinear",
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
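For reference, the same dictionary can be passed to the `PAN` constructor to rebuild the architecture from scratch (without the trained decoder weights) — a minimal sketch, assuming a recent `segmentation_models_pytorch` release whose `PAN` constructor accepts all of these keys:
```python
import torch
import segmentation_models_pytorch as smp

# Same architecture as the checkpoint, but with a freshly initialized decoder.
model = smp.PAN(**model_init_params)
with torch.no_grad():
    logits = model(torch.randn(1, 3, 512, 512))  # -> shape (1, 1, 512, 512)
```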
## Model metrics
```json
[
{
"test_per_image_iou": 0.8172159790992737,
"test_dataset_iou": 0.8556613922119141
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin). |
DmytroSerbeniuk/my-iris-model | DmytroSerbeniuk | 2025-05-29T09:48:59Z | 2 | 0 | scikit-learn | [
"scikit-learn",
"joblib",
"logistic_regression",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T16:31:58Z | ---
license: apache-2.0
library_name: scikit-learn
base_model:
- model.joblib
---
<p align="left">
<a href="https://test.org/">
<img
src="https://img.shields.io/badge/Website-0A66C2?logo=safari&logoColor=white" style="display: inline-block; vertical-align: middle;"
alt="Website"
/>
</a>
<a href="https://github.com/" target="_blank" style="margin: 2px;">
<img
alt="Github" src="https://img.shields.io/badge/Codebase-536af5?color=536af5&logo=github" style="display: inline-block; vertical-align: middle;"
alt="Codebase"
/>
</a>
<a href="https://demo.org/">
<img
src="https://img.shields.io/badge/Demo-blue?logo=googleplay&logoColor=white" style="display: inline-block; vertical-align: middle;"
alt="Demo"
/>
</a>
</p>
# 🌸 Iris Species Classifier (Logistic Regression)
This repository provides a lightweight logistic regression model trained on the classic [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris) using **scikit-learn**. It is ideal for educational purposes, experimentation, and demonstration of inference on tabular data.
## 🧠 Model Overview
- **Algorithm:** Logistic Regression
- **Framework:** scikit-learn
- **Features:**
- `sepal_length`
- `sepal_width`
- `petal_length`
- `petal_width`
- **Target Classes:**
- `setosa`
- `versicolor`
- `virginica`
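For context, a classifier with this exact interface can be reproduced in a few lines of scikit-learn. The following is a hypothetical training sketch (this card does not document the actual training script):
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

# Load the Iris dataset and rename columns to match the feature names above.
X, y = load_iris(return_X_y=True, as_frame=True)
X.columns = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
y = y.map({0: "setosa", 1: "versicolor", 2: "virginica"})

model = LogisticRegression(max_iter=200).fit(X, y)
joblib.dump(model, "model.joblib")
```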
## Usage Example
### Python Code
```python
import joblib
import pandas as pd
model = joblib.load("model.joblib")
sample = pd.DataFrame([[5.1, 3.5, 1.4, 0.2]],
columns=["sepal_length", "sepal_width", "petal_length", "petal_width"])
prediction = model.predict(sample)
print(f"🌸 Predicted class: {prediction[0]}")
```
## How to load the model
```python
from huggingface_hub import hf_hub_download
import joblib
model_path = hf_hub_download("DmytroSerbeniuk/my-iris-model", "model.joblib")
model = joblib.load(model_path)
```
## CLI Inference
You can also use the provided `inference.py` script:
```bash
python3 inference.py 6.0 2.2 4.0 1.0
```
Expected Output:
```text
🌸 Predicted class: versicolor
```
## 🧪 Sample Predictions
| Sepal Length | Sepal Width | Petal Length | Petal Width | Prediction |
| ------------ | ----------- | ------------ | ----------- | ---------- |
| 5.1 | 3.5 | 1.4 | 0.2 | setosa |
| 6.0 | 2.2 | 4.0 | 1.0 | versicolor |
| 6.9 | 3.1 | 5.4 | 2.1 | virginica |
## 📦 Requirements
Dependencies listed in `requirements.txt`:
```text
scikit-learn
pandas
joblib
```
## License
my-iris-model is licensed under the Apache 2.0 license |
while0628/merged_model_WOQ_epoch401 | while0628 | 2025-05-29T09:48:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T09:45:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/facial-age-detection | prithivMLmods | 2025-05-29T09:41:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"siglip",
"image-classification",
"age-detection",
"SigLIP2",
"biology",
"en",
"dataset:prithivMLmods/Face-Age-10K",
"arxiv:2502.14786",
"base_model:google/siglip2-base-patch16-512",
"base_model:finetune:google/siglip2-base-patch16-512",
"doi:10.57967/hf/5659",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-27T16:37:21Z | ---
license: apache-2.0
datasets:
- prithivMLmods/Face-Age-10K
language:
- en
base_model:
- google/siglip2-base-patch16-512
pipeline_tag: image-classification
library_name: transformers
tags:
- age-detection
- SigLIP2
- biology
---

# facial-age-detection
> facial-age-detection is a vision-language encoder model fine-tuned from `google/siglip2-base-patch16-512` for **multi-class image classification**. It is trained to detect and classify human faces into **age groups** ranging from early childhood to elderly adults. The model uses the `SiglipForImageClassification` architecture.
> [!NOTE]
> SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
> [https://arxiv.org/pdf/2502.14786](https://arxiv.org/pdf/2502.14786)
```text
Classification Report:
              precision    recall  f1-score   support

   age 01-10     0.9614    0.9669    0.9641      2474
   age 11-20     0.8418    0.8467    0.8442      1181
   age 21-30     0.8118    0.8326    0.8220      1523
   age 31-40     0.6937    0.6683    0.6808      1010
   age 41-55     0.7106    0.7528    0.7311      1181
   age 56-65     0.6878    0.6646    0.6760       799
   age 66-80     0.7949    0.7596    0.7768       653
    age 80 +     0.9349    0.8343    0.8817       344

    accuracy                         0.8225      9165
   macro avg     0.8046    0.7907    0.7971      9165
weighted avg     0.8226    0.8225    0.8223      9165
```

---
## Label Space: 8 Classes
```
Class 0: age 01-10
Class 1: age 11-20
Class 2: age 21-30
Class 3: age 31-40
Class 4: age 41-55
Class 5: age 56-65
Class 6: age 66-80
Class 7: age 80 +
```
---
## Install Dependencies
```bash
pip install -q transformers torch pillow gradio hf_xet
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/facial-age-detection" # Update with actual model name on Hugging Face
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Updated label mapping
id2label = {
"0": "age 01-10",
"1": "age 11-20",
"2": "age 21-30",
"3": "age 31-40",
"4": "age 41-55",
"5": "age 56-65",
"6": "age 66-80",
"7": "age 80 +"
}
def classify_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=8, label="Age Group Classification"),
title="Facial Age Detection",
    description="Upload a face image to estimate the age group: 01–10, 11–20, 21–30, 31–40, 41–55, 56–65, 66–80, or 80+."
)
if __name__ == "__main__":
iface.launch()
```
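As an optional simplification, the hard-coded mapping above can usually be read from the checkpoint itself, assuming `id2label` is populated in this repo's config:
```python
# Build the label mapping from the model config instead of hard-coding it.
id2label = {str(i): label for i, label in model.config.id2label.items()}
```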
---
## Intended Use
`facial-age-detection` is designed for:
* **Demographic Analytics** – Estimate age distributions in image datasets for research and commercial analysis.
* **Access Control & Verification** – Enforce age-based access in digital or physical environments.
* **Retail & Marketing** – Understand customer demographics in retail spaces through camera-based analytics.
* **Surveillance & Security** – Enhance people classification systems by integrating age detection.
* **Human-Computer Interaction** – Adapt experiences and interfaces based on user age. |
Nerva1228/huakuku | Nerva1228 | 2025-05-29T09:39:59Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T09:13:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: huakuku
---
# Huakuku
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `huakuku` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "huakuku",
"lora_weights": "https://huggingface.co/Nerva1228/huakuku/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/huakuku', weight_name='lora.safetensors')
image = pipeline('huakuku').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/huakuku/discussions) to add images that show off what you've made with this LoRA.
|
Sucube131/lora-sofia_v2 | Sucube131 | 2025-05-29T09:39:27Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T08:57:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sofia
---
# Lora Sofia_V2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sofia` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sofia",
"lora_weights": "https://huggingface.co/Sucube131/lora-sofia_v2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Sucube131/lora-sofia_v2', weight_name='lora.safetensors')
image = pipeline('Sofia').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Sucube131/lora-sofia_v2/discussions) to add images that show off what you've made with this LoRA.
|
Killer-C0wboy/phi4_ibps | Killer-C0wboy | 2025-05-29T09:20:51Z | 0 | 0 | null | [
"safetensors",
"law",
"bail",
"bail-prediction",
"IIT",
"IITK",
"IBPS",
"text-generation",
"en",
"base_model:microsoft/phi-4",
"base_model:finetune:microsoft/phi-4",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-29T09:14:23Z | ---
license: apache-2.0
language:
- en
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
tags:
- law
- bail
- bail-prediction
- IIT
- IITK
- IBPS
---
As of now, using this model's own tokenizer sometimes causes errors. If that happens, use phi-4's tokenizer instead.
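A minimal workaround sketch, assuming the repo loads with the standard `transformers` auto classes (`microsoft/phi-4` is the base model listed above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Killer-C0wboy/phi4_ibps")
# Fall back to the base model's tokenizer if this repo's tokenizer errors out.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
```
|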
maghwa/llama-3.1-8b-powl-improved-100steps-merged | maghwa | 2025-05-29T09:19:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T09:08:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kkkkl5/asdkklkk | kkkkl5 | 2025-05-29T09:13:29Z | 2 | 0 | null | [
"pytorch",
"onnx",
"safetensors",
"modernbert",
"region:us"
] | null | 2025-05-29T05:38:42Z | ### ไปฅไธๆจกๅไธบๆฐๆฎๅพฎ่ฐๅ็ModernBERT-baseๆจกๅ๏ผ็จไบpan2025-task1๏ผ้ๅ็จ๏ผ่ฅๆไพตๆๆ้่ฏทๅๆถ็ง่ไฝ่
ๅๆถไธๆถ๏ผๆ่ฐขๅ
ๆถต |
hdong0/Qwen2.5-Math-1.5B-Open-R1-Distill_4epoch | hdong0 | 2025-05-29T09:07:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T01:04:21Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-Math-1.5B-Open-R1-Distill_4epoch
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-Open-R1-Distill_4epoch
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-1.5B-Open-R1-Distill_4epoch", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tungduong261204/sft_9000 | tungduong261204 | 2025-05-29T09:06:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T09:05:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/Genshin_Signora-000001 | LarryAIDraw | 2025-05-29T09:02:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-29T08:25:36Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1618395/genshin-signora |
IntMeGroup/DFBench_qwen25 | IntMeGroup | 2025-05-29T09:01:41Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T07:13:57Z | ---
license: apache-2.0
---
|
lalallaal7777/finance-llm | lalallaal7777 | 2025-05-29T08:59:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2025-05-29T08:59:04Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
while0628/1B_merged_model_dt_1_lora300 | while0628 | 2025-05-29T08:48:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T08:45:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb92t64204r01b1ykpvg18oo_cmb93cbw7054f1b1y3nmlc89f | BootesVoid | 2025-05-29T08:35:30Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T08:35:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sienna_cruz
---
# Cmb92T64204R01B1Ykpvg18Oo_Cmb93Cbw7054F1B1Y3Nmlc89F
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sienna_cruz` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sienna_cruz",
"lora_weights": "https://huggingface.co/BootesVoid/cmb92t64204r01b1ykpvg18oo_cmb93cbw7054f1b1y3nmlc89f/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb92t64204r01b1ykpvg18oo_cmb93cbw7054f1b1y3nmlc89f', weight_name='lora.safetensors')
image = pipeline('sienna_cruz').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb92t64204r01b1ykpvg18oo_cmb93cbw7054f1b1y3nmlc89f/discussions) to add images that show off what you've made with this LoRA.
|
BootesVoid/cmb92evzk04g21b1y80xizpff_cmb92l2ab04lq1b1ytwn5zjlb | BootesVoid | 2025-05-29T08:34:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T08:34:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: aylina_dev
---
# Cmb92Evzk04G21B1Y80Xizpff_Cmb92L2Ab04Lq1B1Ytwn5Zjlb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `aylina_dev` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "aylina_dev",
"lora_weights": "https://huggingface.co/BootesVoid/cmb92evzk04g21b1y80xizpff_cmb92l2ab04lq1b1ytwn5zjlb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb92evzk04g21b1y80xizpff_cmb92l2ab04lq1b1ytwn5zjlb', weight_name='lora.safetensors')
image = pipeline('aylina_dev').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb92evzk04g21b1y80xizpff_cmb92l2ab04lq1b1ytwn5zjlb/discussions) to add images that show off what you've made with this LoRA.
|
BootesVoid/cmb915d9w03n61b1y89idkvhm_cmb9259pd04a41b1ysxb0qqhq | BootesVoid | 2025-05-29T08:34:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T08:34:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: emma_dixit
---
# Cmb915D9W03N61B1Y89Idkvhm_Cmb9259Pd04A41B1Ysxb0Qqhq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `emma_dixit` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "emma_dixit",
"lora_weights": "https://huggingface.co/BootesVoid/cmb915d9w03n61b1y89idkvhm_cmb9259pd04a41b1ysxb0qqhq/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb915d9w03n61b1y89idkvhm_cmb9259pd04a41b1ysxb0qqhq', weight_name='lora.safetensors')
image = pipeline('emma_dixit').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb915d9w03n61b1y89idkvhm_cmb9259pd04a41b1ysxb0qqhq/discussions) to add images that show off what you've made with this LoRA.
|
nuwooo/test-upload | nuwooo | 2025-05-29T08:22:40Z | 0 | 0 | null | [
"t5",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T13:53:09Z | ---
language: en
datasets:
- c4
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1
## Version 1.1
[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
**Note**: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
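For example, a v1.1 checkpoint can be loaded for fine-tuning with the `transformers` library — a minimal sketch using the released `google/t5-v1_1-base` checkpoint:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load a T5 v1.1 checkpoint; fine-tune it before using it on a downstream task.
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
```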
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
jack-perlo/MobileNet-Imagenet2012 | jack-perlo | 2025-05-29T08:18:50Z | 0 | 0 | keras | [
"keras",
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T08:17:10Z | ---
license: apache-2.0
---
|
AJNG/olm_ocr_v2_merge | AJNG | 2025-05-29T08:11:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:allenai/olmOCR-7B-0225-preview",
"base_model:finetune:allenai/olmOCR-7B-0225-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-29T08:10:43Z | ---
base_model: allenai/olmOCR-7B-0225-preview
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AJNG
- **License:** apache-2.0
- **Finetuned from model :** allenai/olmOCR-7B-0225-preview
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nastoi/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper | Nastoi | 2025-05-29T08:05:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am curious nocturnal sandpiper",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T14:11:41Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am curious nocturnal sandpiper
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nastoi/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
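As a rough sketch of what a GRPO run with TRL looks like (the dataset and reward function below are illustrative placeholders, not the actual swarm configuration):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward favoring completions close to 20 characters;
# a real run would use task-specific rewards.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # illustrative dataset
training_args = GRPOConfig(output_dir="Qwen2.5-1.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```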
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Varinder2110/cb48a888-a11c-481c-9342-edbe79d71a40 | Varinder2110 | 2025-05-29T08:01:00Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T06:54:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Cb48A888 A11C 481C 9342 Edbe79D71A40
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/cb48a888-a11c-481c-9342-edbe79d71a40/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/cb48a888-a11c-481c-9342-edbe79d71a40', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
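If you would rather down-weight the LoRA than run it at full strength, diffusers also lets you set an adapter scale. A minimal sketch (the adapter name and 0.8 weight are illustrative):
```py
pipeline.load_lora_weights(
    'Varinder2110/cb48a888-a11c-481c-9342-edbe79d71a40',
    weight_name='lora.safetensors',
    adapter_name='tok',
)
pipeline.set_adapters(['tok'], adapter_weights=[0.8])  # scale the LoRA's influence
image = pipeline('TOK').images[0]
```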
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/cb48a888-a11c-481c-9342-edbe79d71a40/discussions) to add images that show off what you've made with this LoRA.
|
ankhanhtran02/Llama-DPO-3 | ankhanhtran02 | 2025-05-29T07:53:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/Llama-3.2-1B",
"base_model:finetune:unsloth/Llama-3.2-1B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T06:50:14Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: transformers
model_name: Llama-DPO-3
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-DPO-3
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ankhanhtran02/Llama-DPO-3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/khanh-hust/huggingface/runs/gocdq2fj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
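A rough sketch of a comparable DPO run with TRL (the preference dataset below is a public placeholder with prompt/chosen/rejected columns, not the data used for this model):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B")

# Any dataset of (prompt, chosen, rejected) preference pairs works here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="Llama-DPO-3", beta=0.1)  # beta trades off KL vs. preference fit
trainer = DPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()
```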
### Framework versions
- TRL: 0.18.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HuangXinBa/q-FrozenLake-v1-8x8-noSlippery | HuangXinBa | 2025-05-29T07:52:55Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-29T07:16:25Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or classic `gym`, depending on your installation

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="HuangXinBa/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
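A short sketch of evaluating the loaded policy greedily (this assumes the pickled dict stores the table under a "qtable" key, as in the Deep RL course format):
```python
import numpy as np

episodes, total_reward = 100, 0.0
for _ in range(episodes):
    state, _ = env.reset()  # gymnasium-style API; adjust unpacking for classic gym
    done = False
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
        state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        total_reward += reward
print(f"mean reward over {episodes} episodes: {total_reward / episodes:.2f}")
```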
|
huangqishan/cnn | huangqishan | 2025-05-29T07:51:03Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"cnn_model",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | image-classification | 2025-05-27T00:33:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
crystalai/thoth-emerald-cybersecurity-shield-ai-auto-train-autotrain-advanced | crystalai | 2025-05-29T07:41:49Z | 0 | 1 | adapter-transformers | [
"adapter-transformers",
"chemistry",
"biology",
"legal",
"music",
"art",
"code",
"climate",
"text-generation-inference",
"merge",
"dataset:DMindAI/DMind_Benchmark",
"dataset:nvidia/OpenMathReasoning",
"dataset:nvidia/OpenCodeReasoning",
"base_model:ACE-Step/ACE-Step-v1-3.5B",
"base_model:adapter:ACE-Step/ACE-Step-v1-3.5B",
"license:c-uda",
"region:us"
] | null | 2025-05-29T07:39:14Z | ---
license: c-uda
datasets:
- DMindAI/DMind_Benchmark
- nvidia/OpenMathReasoning
- nvidia/OpenCodeReasoning
metrics:
- character
- accuracy
base_model:
- ACE-Step/ACE-Step-v1-3.5B
- google/gemma-3n-E4B-it-litert-preview
new_version: nvidia/parakeet-tdt-0.6b-v2
library_name: adapter-transformers
tags:
- chemistry
- biology
- legal
- music
- art
- code
- climate
- text-generation-inference
- merge
--- |
kmpartner/k5pcmlra-test | kmpartner | 2025-05-29T07:40:12Z | 150 | 0 | peft | [
"peft",
"tensorboard",
"diffusers",
"safetensors",
"arxiv:1910.09700",
"base_model:kmpartner/k512-ft-test",
"base_model:adapter:kmpartner/k512-ft-test",
"region:us"
] | null | 2025-04-27T12:32:18Z | ---
base_model: kmpartner/k512-ft-test
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
- PEFT 0.14.0 |
fuji12345/segformer-finetuned-sidewalk-10k-steps | fuji12345 | 2025-05-29T07:25:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2025-05-29T07:23:23Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-sidewalk-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the fuji12345/sample-segmentation_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1181
- Mean Iou: 0.0
- Mean Accuracy: nan
- Overall Accuracy: nan
- Accuracy Hallucination: nan
- Accuracy Normal: nan
- Iou Hallucination: 0.0
- Iou Normal: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch below):
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: polynomial
- training_steps: 200
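For reference, a sketch of how these map onto `transformers.TrainingArguments` (an approximation; anything not listed above is left at its default):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="segformer-finetuned-sidewalk-10k-steps",
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    max_steps=200,
)
```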
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Hallucination | Accuracy Normal | Iou Hallucination | Iou Normal |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------:|:---------------:|:-----------------:|:----------:|
| No log | 1.0 | 21 | 0.4468 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| No log | 2.0 | 42 | 0.2196 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| No log | 3.0 | 63 | 0.2136 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| No log | 4.0 | 84 | 0.1816 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| 0.3217 | 5.0 | 105 | 0.1449 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| 0.3217 | 6.0 | 126 | 0.1380 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| 0.3217 | 7.0 | 147 | 0.1239 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| 0.3217 | 8.0 | 168 | 0.1247 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| 0.3217 | 9.0 | 189 | 0.1163 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
| 0.1596 | 9.5238 | 200 | 0.1181 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Khal5454/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog | Khal5454 | 2025-05-29T07:18:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bristly beaked hedgehog",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-15T00:41:50Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bristly beaked hedgehog
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Khal5454/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yamatazen/HMS-Fusion-12B | yamatazen | 2025-05-29T07:15:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"base_model:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:merge:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:yamatazen/Himeyuri-Magnum-12B",
"base_model:merge:yamatazen/Himeyuri-Magnum-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T06:24:47Z | ---
base_model:
- yamatazen/Himeyuri-Magnum-12B
- shisa-ai/shisa-v2-mistral-nemo-12b
library_name: transformers
tags:
- mergekit
- merge
language:
- en
- ja
---

# HMS-Fusion-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Arcee Fusion](https://arcee.ai) merge method using [shisa-ai/shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b) as a base.
### Models Merged
The following models were included in the merge:
* [yamatazen/Himeyuri-Magnum-12B](https://huggingface.co/yamatazen/Himeyuri-Magnum-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: arcee_fusion
dtype: bfloat16
out_dtype: bfloat16
base_model: shisa-ai/shisa-v2-mistral-nemo-12b
models:
- model: yamatazen/Himeyuri-Magnum-12B
``` |
AshwiniFromIITK/gemma-3-0_1b_NewDS1.0 | AshwiniFromIITK | 2025-05-29T07:13:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T07:13:30Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AshwiniFromIITK
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
unsloth/DeepSeek-R1-GGUF | unsloth | 2025-05-29T07:12:29Z | 80,628 | 1,074 | transformers | [
"transformers",
"gguf",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"custom_code",
"en",
"arxiv:2501.12948",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:quantized:deepseek-ai/DeepSeek-R1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-20T13:09:42Z | ---
base_model: deepseek-ai/DeepSeek-R1
language:
- en
library_name: transformers
license: mit
tags:
- deepseek
- unsloth
- transformers
new_version: unsloth/DeepSeek-R1-GGUF-UD
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5">our collection</a> for versions of Deepseek-R1 including GGUF & 4-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's DeepSeek-R1 <a href="https://unsloth.ai/blog/deepseekr1-dynamic">1.58-bit + 2-bit Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 1-bit/2-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">Instructions to run this model in llama.cpp:</h1>
</div>
You can also view more detailed instructions here: [unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic)
1. Do not forget about the `<｜User｜>` and `<｜Assistant｜>` tokens - or use a chat template formatter.
2. Obtain the latest `llama.cpp` at https://github.com/ggerganov/llama.cpp. You can follow the build instructions below as well:
```bash
apt-get update
apt-get install build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
-DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
3. It's best to use `--min-p 0.05` to counteract very rare token predictions - I found this to work well, especially for the 1.58bit model.
4. Download the model via:
```python
# pip install huggingface_hub hf_transfer
# import os # Optional for faster downloading
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
repo_id = "unsloth/DeepSeek-R1-GGUF",
local_dir = "DeepSeek-R1-GGUF",
allow_patterns = ["*UD-IQ1_S*"], # Select quant type UD-IQ1_S for 1.58bit
)
```
5. Example with the K cache quantized to Q4_0. **Notice: `-no-cnv` disables auto conversation mode**
```bash
./llama.cpp/llama-cli \
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--cache-type-k q4_0 \
--threads 12 -no-cnv --prio 2 \
--temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--prompt "<๏ฝUser๏ฝ>Create a Flappy Bird game in Python.<๏ฝAssistant๏ฝ>"
```
Example output:
```txt
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```
6. If you have a GPU (RTX 4090 for example) with 24GB, you can offload multiple layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.
```bash
./llama.cpp/llama-cli \
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--cache-type-k q4_0 \
--threads 12 -no-cnv --prio 2 \
--n-gpu-layers 7 \
--temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--prompt "<๏ฝUser๏ฝ>Create a Flappy Bird game in Python.<๏ฝAssistant๏ฝ>"
```
7. If you want to merge the weights together, use this script:
```
./llama.cpp/llama-gguf-split --merge \
DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
merged_file.gguf
```
| MoE Bits | Type | Disk Size | Accuracy | Link | Details |
| -------- | -------- | ------------ | ------------ | ---------------------| ---------- |
| 1.58bit | UD-IQ1_S | **131GB** | Fair | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S) | MoE all 1.56bit. `down_proj` in MoE mixture of 2.06/1.56bit |
| 1.73bit | UD-IQ1_M | **158GB** | Good | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M) | MoE all 1.56bit. `down_proj` in MoE left at 2.06bit |
| 2.22bit | UD-IQ2_XXS | **183GB** | Better | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS) | MoE all 2.06bit. `down_proj` in MoE mixture of 2.5/2.06bit |
| 2.51bit | UD-Q2_K_XL | **212GB** | Best | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL) | MoE all 2.5bit. `down_proj` in MoE mixture of 3.5/2.5bit |
# Finetune your own Reasoning model like R1 with Unsloth!
We have a free Google Colab notebook for turning Llama 3.1 (8B) into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the DeepSeek team for creating and releasing these models.
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/๐ค%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>๐๏ธ</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
| DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings when running these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
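Concretely, pass@1 estimated this way is just the average per-query correctness over the 64 samples (a minimal sketch of the estimator):
```python
def estimate_pass_at_1(results):
    """results: list of (num_correct, num_samples) per query, e.g. num_samples=64."""
    return sum(correct / n for correct, n in results) / len(results)

# Two queries, 64 samples each: 48/64 and 60/64 correct -> pass@1 = 0.84375
print(estimate_pass_at_1([(48, 64), (60, 64)]))
```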
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance; a short example applying these settings follows the list:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
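For example, a minimal sketch applying these settings with vLLM's offline chat API (the model choice, parallelism, and prompt are illustrative):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", tensor_parallel_size=2)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768)

# No system prompt: everything, including the math directive, goes in the user turn.
messages = [{"role": "user", "content": "Please reason step by step, and put your final answer within \\boxed{}. What is 17 * 24?"}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```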
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com). |
YuchenLi01/genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr3e-06_beta0.1_42 | YuchenLi01 | 2025-05-29T07:11:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T03:09:11Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT
model-index:
- name: genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr3e-06_beta0.1_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr3e-06_beta0.1_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7986
- Rewards/chosen: -8.2096
- Rewards/rejected: -11.0162
- Rewards/accuracies: 0.7195
- Rewards/margins: 2.8065
- Logps/rejected: -157.9193
- Logps/chosen: -124.5607
- Logits/rejected: -2.6784
- Logits/chosen: -2.8351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
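For illustration, these settings map roughly onto TRL's `DPOConfig` as sketched below; this is not the original training script, and the output directory is a placeholder.

```python
# Rough reconstruction of the listed hyperparameters with TRL (illustrative
# sketch; the actual alignment-handbook training script is not in this card).
from trl import DPOConfig

config = DPOConfig(
    learning_rate=3e-6,
    beta=0.1,                        # DPO beta, per the model name
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    output_dir="dpo-output",         # placeholder
)
```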
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6932 | 0.0135 | 20 | 0.6935 | 0.0027 | 0.0006 | 0.5152 | 0.0021 | -47.7519 | -42.4376 | -2.2058 | -2.3115 |
| 0.698 | 0.0270 | 40 | 0.6918 | -0.0110 | -0.0156 | 0.5305 | 0.0046 | -47.9133 | -42.5743 | -2.1930 | -2.2989 |
| 0.6833 | 0.0405 | 60 | 0.6886 | -0.0591 | -0.0721 | 0.6067 | 0.0130 | -48.4787 | -43.0550 | -2.1562 | -2.2633 |
| 0.6758 | 0.0540 | 80 | 0.6822 | -0.1525 | -0.1733 | 0.5915 | 0.0209 | -49.4910 | -43.9891 | -2.0868 | -2.1942 |
| 0.6368 | 0.0675 | 100 | 0.6718 | -0.3816 | -0.4269 | 0.5732 | 0.0453 | -52.0264 | -46.2800 | -1.9451 | -2.0545 |
| 0.6107 | 0.0810 | 120 | 0.6598 | -0.5907 | -0.6752 | 0.5793 | 0.0845 | -54.5097 | -48.3717 | -1.8466 | -1.9569 |
| 0.6334 | 0.0945 | 140 | 0.6440 | -0.7884 | -0.9276 | 0.6220 | 0.1392 | -57.0334 | -50.3479 | -1.7817 | -1.8943 |
| 0.6165 | 0.1080 | 160 | 0.6256 | -0.8388 | -1.0405 | 0.6585 | 0.2017 | -58.1625 | -50.8525 | -1.7930 | -1.9070 |
| 0.5796 | 0.1215 | 180 | 0.6061 | -1.1541 | -1.4590 | 0.6738 | 0.3049 | -62.3480 | -54.0052 | -1.7217 | -1.8413 |
| 0.5451 | 0.1350 | 200 | 0.5878 | -1.5160 | -1.9779 | 0.7165 | 0.4620 | -67.5371 | -57.6239 | -1.6771 | -1.8018 |
| 0.5041 | 0.1484 | 220 | 0.5730 | -1.6805 | -2.2536 | 0.7256 | 0.5731 | -70.2937 | -59.2698 | -1.7600 | -1.8889 |
| 0.5838 | 0.1619 | 240 | 0.5630 | -2.0952 | -2.8696 | 0.7104 | 0.7744 | -76.4537 | -63.4161 | -1.8759 | -2.0141 |
| 0.4511 | 0.1754 | 260 | 0.5494 | -2.0629 | -2.9299 | 0.7134 | 0.8670 | -77.0566 | -63.0929 | -1.9798 | -2.1200 |
| 0.6892 | 0.1889 | 280 | 0.5378 | -1.7049 | -2.4428 | 0.7256 | 0.7379 | -72.1853 | -59.5132 | -1.9995 | -2.1393 |
| 0.5347 | 0.2024 | 300 | 0.5369 | -2.1250 | -2.9276 | 0.7134 | 0.8025 | -77.0332 | -63.7145 | -2.0005 | -2.1456 |
| 0.2873 | 0.2159 | 320 | 0.5337 | -2.8430 | -3.8354 | 0.7226 | 0.9924 | -86.1114 | -70.8942 | -1.8420 | -1.9899 |
| 0.6177 | 0.2294 | 340 | 0.5416 | -3.0456 | -3.9991 | 0.7195 | 0.9534 | -87.7482 | -72.9206 | -1.6151 | -1.7616 |
| 0.4691 | 0.2429 | 360 | 0.5487 | -2.5956 | -3.5875 | 0.7165 | 0.9920 | -83.6329 | -68.4202 | -1.9755 | -2.1279 |
| 0.382 | 0.2564 | 380 | 0.5561 | -2.8632 | -3.9593 | 0.7104 | 1.0961 | -87.3508 | -71.0966 | -2.1512 | -2.3018 |
| 0.4635 | 0.2699 | 400 | 0.5470 | -3.2646 | -4.5135 | 0.7073 | 1.2489 | -92.8929 | -75.1105 | -2.0813 | -2.2349 |
| 0.3273 | 0.2834 | 420 | 0.5338 | -3.0949 | -4.2684 | 0.7043 | 1.1735 | -90.4412 | -73.4134 | -2.0155 | -2.1653 |
| 0.5248 | 0.2969 | 440 | 0.5482 | -3.2610 | -4.4834 | 0.6951 | 1.2224 | -92.5914 | -75.0738 | -1.9881 | -2.1360 |
| 0.5027 | 0.3104 | 460 | 0.5500 | -3.5928 | -4.7529 | 0.7043 | 1.1602 | -95.2866 | -78.3918 | -1.7034 | -1.8456 |
| 0.3677 | 0.3239 | 480 | 0.5322 | -3.1634 | -4.3019 | 0.7165 | 1.1386 | -90.7771 | -74.0980 | -1.8318 | -1.9790 |
| 0.4315 | 0.3374 | 500 | 0.5369 | -3.3935 | -4.5434 | 0.6982 | 1.1499 | -93.1914 | -76.3990 | -1.9383 | -2.0830 |
| 0.4743 | 0.3509 | 520 | 0.5517 | -3.8368 | -5.1456 | 0.7012 | 1.3088 | -99.2137 | -80.8321 | -1.9747 | -2.1211 |
| 0.4281 | 0.3644 | 540 | 0.5354 | -3.5222 | -4.7582 | 0.7043 | 1.2360 | -95.3400 | -77.6866 | -1.9706 | -2.1128 |
| 0.3904 | 0.3779 | 560 | 0.5341 | -2.9636 | -4.1276 | 0.6951 | 1.1640 | -89.0335 | -72.1004 | -2.1134 | -2.2582 |
| 0.4332 | 0.3914 | 580 | 0.5486 | -3.3775 | -4.5991 | 0.6890 | 1.2216 | -93.7486 | -76.2393 | -2.0895 | -2.2364 |
| 0.339 | 0.4049 | 600 | 0.5638 | -4.1546 | -5.4255 | 0.6982 | 1.2709 | -102.0127 | -84.0106 | -2.0277 | -2.1744 |
| 0.4905 | 0.4184 | 620 | 0.5524 | -3.9891 | -5.3172 | 0.7073 | 1.3281 | -100.9295 | -82.3554 | -1.9876 | -2.1320 |
| 0.4174 | 0.4318 | 640 | 0.5702 | -4.1295 | -5.4509 | 0.6951 | 1.3214 | -102.2665 | -83.7597 | -2.0315 | -2.1766 |
| 0.5769 | 0.4453 | 660 | 0.5488 | -3.7582 | -5.0878 | 0.7134 | 1.3296 | -98.6353 | -80.0459 | -2.0555 | -2.2028 |
| 0.3792 | 0.4588 | 680 | 0.5732 | -4.1756 | -5.5629 | 0.6738 | 1.3873 | -103.3868 | -84.2208 | -2.0918 | -2.2417 |
| 0.4978 | 0.4723 | 700 | 0.5910 | -5.0006 | -6.5765 | 0.7226 | 1.5759 | -113.5224 | -92.4701 | -2.0050 | -2.1537 |
| 0.3949 | 0.4858 | 720 | 0.5626 | -4.1391 | -5.4077 | 0.6951 | 1.2685 | -101.8343 | -83.8556 | -1.8587 | -1.9992 |
| 0.3405 | 0.4993 | 740 | 0.5629 | -4.6657 | -5.9561 | 0.6799 | 1.2903 | -107.3183 | -89.1215 | -1.7706 | -1.9063 |
| 0.3281 | 0.5128 | 760 | 0.5531 | -4.8452 | -6.1634 | 0.7073 | 1.3182 | -109.3915 | -90.9158 | -1.7602 | -1.8955 |
| 0.2658 | 0.5263 | 780 | 0.5405 | -4.3611 | -5.5910 | 0.6921 | 1.2298 | -103.6672 | -86.0758 | -1.8014 | -1.9397 |
| 0.286 | 0.5398 | 800 | 0.5640 | -4.8028 | -6.0722 | 0.6738 | 1.2694 | -108.4794 | -90.4926 | -1.7000 | -1.8362 |
| 0.2935 | 0.5533 | 820 | 0.5643 | -4.7649 | -6.1747 | 0.6982 | 1.4098 | -109.5050 | -90.1134 | -1.8640 | -2.0113 |
| 0.7291 | 0.5668 | 840 | 0.5847 | -4.6228 | -6.0845 | 0.6738 | 1.4617 | -108.6021 | -88.6922 | -1.9778 | -2.1290 |
| 0.2744 | 0.5803 | 860 | 0.5719 | -4.5591 | -5.9668 | 0.6921 | 1.4077 | -107.4256 | -88.0552 | -1.8563 | -1.9951 |
| 0.2714 | 0.5938 | 880 | 0.5570 | -4.7344 | -6.2742 | 0.6982 | 1.5397 | -110.4993 | -89.8087 | -1.8844 | -2.0237 |
| 0.2154 | 0.6073 | 900 | 0.5679 | -4.4651 | -5.9958 | 0.6951 | 1.5307 | -107.7154 | -87.1152 | -2.0424 | -2.1859 |
| 0.2149 | 0.6208 | 920 | 0.5619 | -4.8360 | -6.4334 | 0.6890 | 1.5975 | -112.0920 | -90.8241 | -1.9294 | -2.0708 |
| 0.2674 | 0.6343 | 940 | 0.5875 | -4.9711 | -6.6360 | 0.6951 | 1.6650 | -114.1180 | -92.1752 | -1.9314 | -2.0687 |
| 0.4282 | 0.6478 | 960 | 0.5766 | -4.9849 | -6.5652 | 0.7165 | 1.5802 | -113.4094 | -92.3137 | -1.8290 | -1.9571 |
| 0.3954 | 0.6613 | 980 | 0.5739 | -4.7550 | -6.3625 | 0.7134 | 1.6075 | -111.3823 | -90.0139 | -1.9485 | -2.0847 |
| 0.2234 | 0.6748 | 1000 | 0.5836 | -4.9011 | -6.4827 | 0.6890 | 1.5816 | -112.5841 | -91.4753 | -1.9944 | -2.1356 |
| 0.3507 | 0.6883 | 1020 | 0.5981 | -5.1430 | -6.6647 | 0.6982 | 1.5217 | -114.4045 | -93.8940 | -1.9444 | -2.0856 |
| 0.1996 | 0.7018 | 1040 | 0.6022 | -5.4486 | -6.9077 | 0.6890 | 1.4591 | -116.8345 | -96.9507 | -1.8633 | -1.9987 |
| 0.3366 | 0.7152 | 1060 | 0.5930 | -5.4007 | -6.8636 | 0.6951 | 1.4629 | -116.3932 | -96.4710 | -1.8210 | -1.9525 |
| 0.2799 | 0.7287 | 1080 | 0.5939 | -5.1248 | -6.6455 | 0.6921 | 1.5207 | -114.2127 | -93.7127 | -1.9237 | -2.0631 |
| 0.1774 | 0.7422 | 1100 | 0.5887 | -5.2942 | -6.9182 | 0.6707 | 1.6240 | -116.9399 | -95.4067 | -1.9503 | -2.0926 |
| 0.1881 | 0.7557 | 1120 | 0.5847 | -5.3894 | -7.0881 | 0.7073 | 1.6987 | -118.6391 | -96.3587 | -1.9158 | -2.0561 |
| 0.1468 | 0.7692 | 1140 | 0.6024 | -5.1630 | -6.8624 | 0.7195 | 1.6993 | -116.3812 | -94.0945 | -2.0780 | -2.2264 |
| 0.2044 | 0.7827 | 1160 | 0.5923 | -5.2588 | -6.8410 | 0.7287 | 1.5821 | -116.1671 | -95.0524 | -1.8724 | -2.0068 |
| 0.1729 | 0.7962 | 1180 | 0.6154 | -5.3917 | -6.9531 | 0.7165 | 1.5614 | -117.2889 | -96.3815 | -1.7913 | -1.9233 |
| 0.4346 | 0.8097 | 1200 | 0.6447 | -5.5422 | -7.2877 | 0.6982 | 1.7455 | -120.6349 | -97.8868 | -1.9300 | -2.0718 |
| 0.3607 | 0.8232 | 1220 | 0.6419 | -5.3649 | -7.0586 | 0.7073 | 1.6937 | -118.3431 | -96.1130 | -2.0518 | -2.1984 |
| 0.1503 | 0.8367 | 1240 | 0.6159 | -5.3762 | -6.9164 | 0.7317 | 1.5402 | -116.9220 | -96.2265 | -1.8972 | -2.0351 |
| 0.2573 | 0.8502 | 1260 | 0.6124 | -5.4776 | -7.0708 | 0.7134 | 1.5932 | -118.4654 | -97.2404 | -1.8292 | -1.9629 |
| 0.1462 | 0.8637 | 1280 | 0.6120 | -5.2697 | -6.7857 | 0.7012 | 1.5160 | -115.6143 | -95.1613 | -1.8302 | -1.9616 |
| 0.1257 | 0.8772 | 1300 | 0.6244 | -5.5866 | -7.1501 | 0.6921 | 1.5635 | -119.2590 | -98.3303 | -1.7792 | -1.9090 |
| 0.3012 | 0.8907 | 1320 | 0.6088 | -5.3524 | -6.9913 | 0.7012 | 1.6389 | -117.6706 | -95.9882 | -1.9143 | -2.0514 |
| 0.3761 | 0.9042 | 1340 | 0.5789 | -4.9541 | -6.5670 | 0.6982 | 1.6129 | -113.4279 | -92.0057 | -1.9220 | -2.0595 |
| 0.1778 | 0.9177 | 1360 | 0.5806 | -4.7837 | -6.3508 | 0.6890 | 1.5671 | -111.2653 | -90.3013 | -1.9173 | -2.0517 |
| 0.1571 | 0.9312 | 1380 | 0.6048 | -5.2190 | -6.8490 | 0.7195 | 1.6300 | -116.2471 | -94.6541 | -1.8461 | -1.9781 |
| 0.2113 | 0.9447 | 1400 | 0.6298 | -5.3607 | -7.1482 | 0.7165 | 1.7875 | -119.2393 | -96.0714 | -1.9991 | -2.1436 |
| 0.4745 | 0.9582 | 1420 | 0.6332 | -5.4461 | -7.3324 | 0.7012 | 1.8863 | -121.0819 | -96.9253 | -2.0252 | -2.1764 |
| 0.3124 | 0.9717 | 1440 | 0.6478 | -5.6656 | -7.5328 | 0.6890 | 1.8673 | -123.0858 | -99.1199 | -2.0044 | -2.1568 |
| 0.272 | 0.9852 | 1460 | 0.6206 | -5.2093 | -6.8896 | 0.7104 | 1.6803 | -116.6534 | -94.5576 | -1.9764 | -2.1244 |
| 0.2066 | 0.9987 | 1480 | 0.6104 | -5.2178 | -6.8566 | 0.7134 | 1.6388 | -116.3233 | -94.6423 | -1.9168 | -2.0597 |
| 0.0392 | 1.0121 | 1500 | 0.6585 | -5.8883 | -7.8940 | 0.7195 | 2.0057 | -126.6976 | -101.3469 | -2.3078 | -2.4746 |
| 0.0164 | 1.0256 | 1520 | 0.7829 | -7.4564 | -9.9377 | 0.7134 | 2.4813 | -147.1341 | -117.0279 | -2.7262 | -2.8967 |
| 0.1045 | 1.0391 | 1540 | 0.8157 | -7.8191 | -10.3298 | 0.7012 | 2.5107 | -151.0553 | -120.6553 | -2.7029 | -2.8767 |
| 0.0188 | 1.0526 | 1560 | 0.7966 | -7.5779 | -10.0465 | 0.6921 | 2.4686 | -148.2228 | -118.2432 | -2.7555 | -2.9259 |
| 0.0793 | 1.0661 | 1580 | 0.7513 | -6.9530 | -9.3107 | 0.7134 | 2.3576 | -140.8642 | -111.9945 | -2.5893 | -2.7629 |
| 0.0119 | 1.0796 | 1600 | 0.7854 | -7.3190 | -9.7118 | 0.7012 | 2.3928 | -144.8755 | -115.6543 | -2.6172 | -2.7918 |
| 0.1849 | 1.0931 | 1620 | 0.7972 | -7.4716 | -9.8057 | 0.7073 | 2.3341 | -145.8146 | -117.1800 | -2.5619 | -2.7385 |
| 0.1462 | 1.1066 | 1640 | 0.7940 | -7.5418 | -9.9524 | 0.6951 | 2.4106 | -147.2812 | -117.8819 | -2.6300 | -2.8039 |
| 0.0235 | 1.1201 | 1660 | 0.8168 | -7.8351 | -10.2565 | 0.6860 | 2.4215 | -150.3227 | -120.8149 | -2.6886 | -2.8616 |
| 0.0226 | 1.1336 | 1680 | 0.8101 | -7.9739 | -10.4037 | 0.6799 | 2.4298 | -151.7950 | -122.2037 | -2.5396 | -2.7167 |
| 0.1962 | 1.1471 | 1700 | 0.8017 | -7.8880 | -10.2642 | 0.6860 | 2.3761 | -150.3994 | -121.3448 | -2.4734 | -2.6511 |
| 0.0537 | 1.1606 | 1720 | 0.8081 | -7.9880 | -10.3473 | 0.6982 | 2.3593 | -151.2302 | -122.3445 | -2.4584 | -2.6356 |
| 0.0724 | 1.1741 | 1740 | 0.7932 | -7.9194 | -10.2998 | 0.7043 | 2.3804 | -150.7551 | -121.6580 | -2.4803 | -2.6572 |
| 0.0515 | 1.1876 | 1760 | 0.7982 | -8.1981 | -10.7219 | 0.6982 | 2.5238 | -154.9765 | -124.4456 | -2.5665 | -2.7423 |
| 0.0356 | 1.2011 | 1780 | 0.7704 | -7.6698 | -10.1103 | 0.7073 | 2.4405 | -148.8607 | -119.1623 | -2.5855 | -2.7586 |
| 0.0612 | 1.2146 | 1800 | 0.7604 | -7.6082 | -9.9330 | 0.6951 | 2.3248 | -147.0874 | -118.5465 | -2.4894 | -2.6612 |
| 0.0638 | 1.2281 | 1820 | 0.7867 | -7.9216 | -10.3041 | 0.7012 | 2.3824 | -150.7981 | -121.6805 | -2.5533 | -2.7232 |
| 0.0391 | 1.2416 | 1840 | 0.7707 | -7.8846 | -10.2797 | 0.7012 | 2.3951 | -150.5542 | -121.3100 | -2.4966 | -2.6666 |
| 0.0417 | 1.2551 | 1860 | 0.7726 | -7.7731 | -10.1891 | 0.6890 | 2.4160 | -149.6489 | -120.1955 | -2.4996 | -2.6686 |
| 0.1465 | 1.2686 | 1880 | 0.7845 | -7.7861 | -10.1994 | 0.7012 | 2.4133 | -149.7519 | -120.3252 | -2.5368 | -2.7040 |
| 0.1198 | 1.2821 | 1900 | 0.7623 | -7.5229 | -9.8738 | 0.7073 | 2.3509 | -146.4954 | -117.6936 | -2.5132 | -2.6806 |
| 0.0351 | 1.2955 | 1920 | 0.7589 | -7.4409 | -9.8266 | 0.7043 | 2.3857 | -146.0238 | -116.8731 | -2.5724 | -2.7384 |
| 0.0328 | 1.3090 | 1940 | 0.7676 | -7.6615 | -10.1270 | 0.7165 | 2.4655 | -149.0274 | -119.0791 | -2.5571 | -2.7246 |
| 0.0637 | 1.3225 | 1960 | 0.7694 | -7.7646 | -10.2770 | 0.7043 | 2.5124 | -150.5276 | -120.1108 | -2.5798 | -2.7484 |
| 0.0107 | 1.3360 | 1980 | 0.7757 | -8.0093 | -10.5474 | 0.6982 | 2.5381 | -153.2317 | -122.5578 | -2.6287 | -2.7971 |
| 0.067 | 1.3495 | 2000 | 0.7716 | -8.0863 | -10.6369 | 0.7073 | 2.5506 | -154.1264 | -123.3275 | -2.6235 | -2.7918 |
| 0.1061 | 1.3630 | 2020 | 0.7865 | -8.2293 | -10.8332 | 0.7043 | 2.6040 | -156.0900 | -124.7568 | -2.7009 | -2.8653 |
| 0.0289 | 1.3765 | 2040 | 0.7810 | -8.1502 | -10.7224 | 0.7043 | 2.5722 | -154.9812 | -123.9664 | -2.6720 | -2.8378 |
| 0.0972 | 1.3900 | 2060 | 0.7730 | -8.0664 | -10.6238 | 0.7012 | 2.5574 | -153.9956 | -123.1285 | -2.6196 | -2.7858 |
| 0.0666 | 1.4035 | 2080 | 0.7773 | -8.1421 | -10.7574 | 0.7012 | 2.6153 | -155.3316 | -123.8856 | -2.5964 | -2.7629 |
| 0.0532 | 1.4170 | 2100 | 0.7766 | -7.8903 | -10.5074 | 0.7073 | 2.6172 | -152.8320 | -121.3672 | -2.6581 | -2.8217 |
| 0.0314 | 1.4305 | 2120 | 0.7982 | -8.0955 | -10.7485 | 0.7043 | 2.6530 | -155.2428 | -123.4193 | -2.7387 | -2.8996 |
| 0.0183 | 1.4440 | 2140 | 0.8264 | -8.4855 | -11.2429 | 0.7012 | 2.7574 | -160.1870 | -127.3194 | -2.7942 | -2.9535 |
| 0.0322 | 1.4575 | 2160 | 0.8323 | -8.5794 | -11.4000 | 0.7073 | 2.8206 | -161.7575 | -128.2585 | -2.7848 | -2.9441 |
| 0.041 | 1.4710 | 2180 | 0.8331 | -8.4639 | -11.2632 | 0.7134 | 2.7993 | -160.3898 | -127.1032 | -2.7520 | -2.9120 |
| 0.0579 | 1.4845 | 2200 | 0.8314 | -8.4460 | -11.2630 | 0.7073 | 2.8171 | -160.3879 | -126.9241 | -2.7355 | -2.8945 |
| 0.0447 | 1.4980 | 2220 | 0.8475 | -8.7251 | -11.6037 | 0.6951 | 2.8786 | -163.7943 | -129.7154 | -2.7573 | -2.9151 |
| 0.0334 | 1.5115 | 2240 | 0.8456 | -8.5647 | -11.4249 | 0.7104 | 2.8602 | -162.0067 | -128.1111 | -2.7562 | -2.9123 |
| 0.0684 | 1.5250 | 2260 | 0.8370 | -8.4659 | -11.2868 | 0.7104 | 2.8209 | -160.6254 | -127.1236 | -2.7180 | -2.8741 |
| 0.0371 | 1.5385 | 2280 | 0.8130 | -8.1195 | -10.8430 | 0.7226 | 2.7235 | -156.1876 | -123.6598 | -2.6468 | -2.8058 |
| 0.0464 | 1.5520 | 2300 | 0.7959 | -7.8880 | -10.5419 | 0.7256 | 2.6539 | -153.1767 | -121.3445 | -2.6318 | -2.7917 |
| 0.132 | 1.5655 | 2320 | 0.8046 | -8.0007 | -10.6859 | 0.7134 | 2.6852 | -154.6165 | -122.4710 | -2.6650 | -2.8229 |
| 0.1123 | 1.5789 | 2340 | 0.7999 | -7.9722 | -10.6545 | 0.7073 | 2.6823 | -154.3025 | -122.1862 | -2.6540 | -2.8118 |
| 0.0315 | 1.5924 | 2360 | 0.7981 | -8.0189 | -10.7433 | 0.7195 | 2.7244 | -155.1905 | -122.6537 | -2.6711 | -2.8278 |
| 0.0431 | 1.6059 | 2380 | 0.7971 | -7.9883 | -10.6785 | 0.7195 | 2.6902 | -154.5428 | -122.3476 | -2.6835 | -2.8397 |
| 0.0141 | 1.6194 | 2400 | 0.8066 | -8.1197 | -10.8496 | 0.7104 | 2.7299 | -156.2533 | -123.6613 | -2.7069 | -2.8631 |
| 0.0131 | 1.6329 | 2420 | 0.8118 | -8.2356 | -11.0175 | 0.7104 | 2.7819 | -157.9327 | -124.8201 | -2.7135 | -2.8689 |
| 0.021 | 1.6464 | 2440 | 0.8086 | -8.2194 | -11.0084 | 0.7134 | 2.7890 | -157.8416 | -124.6581 | -2.7025 | -2.8590 |
| 0.0815 | 1.6599 | 2460 | 0.8084 | -8.2212 | -11.0179 | 0.7165 | 2.7967 | -157.9362 | -124.6758 | -2.7084 | -2.8639 |
| 0.228 | 1.6734 | 2480 | 0.8069 | -8.2791 | -11.0812 | 0.7195 | 2.8022 | -158.5699 | -125.2550 | -2.7143 | -2.8686 |
| 0.2091 | 1.6869 | 2500 | 0.8084 | -8.2704 | -11.0684 | 0.7104 | 2.7980 | -158.4418 | -125.1683 | -2.7103 | -2.8651 |
| 0.0353 | 1.7004 | 2520 | 0.7992 | -8.1737 | -10.9369 | 0.7165 | 2.7632 | -157.1261 | -124.2010 | -2.6881 | -2.8445 |
| 0.0365 | 1.7139 | 2540 | 0.7955 | -8.1190 | -10.8625 | 0.7165 | 2.7436 | -156.3828 | -123.6540 | -2.6882 | -2.8448 |
| 0.0223 | 1.7274 | 2560 | 0.7969 | -8.1139 | -10.8692 | 0.7165 | 2.7553 | -156.4496 | -123.6037 | -2.6943 | -2.8496 |
| 0.0366 | 1.7409 | 2580 | 0.7937 | -8.0780 | -10.8367 | 0.7134 | 2.7587 | -156.1250 | -123.2446 | -2.7010 | -2.8571 |
| 0.1751 | 1.7544 | 2600 | 0.7945 | -8.0932 | -10.8383 | 0.7165 | 2.7451 | -156.1403 | -123.3965 | -2.6903 | -2.8463 |
| 0.2034 | 1.7679 | 2620 | 0.7994 | -8.1271 | -10.8788 | 0.7134 | 2.7517 | -156.5453 | -123.7353 | -2.6980 | -2.8538 |
| 0.0266 | 1.7814 | 2640 | 0.7958 | -8.1270 | -10.9001 | 0.7134 | 2.7732 | -156.7589 | -123.7339 | -2.6955 | -2.8512 |
| 0.0061 | 1.7949 | 2660 | 0.7986 | -8.1622 | -10.9440 | 0.7134 | 2.7818 | -157.1977 | -124.0862 | -2.6964 | -2.8520 |
| 0.0324 | 1.8084 | 2680 | 0.7979 | -8.1705 | -10.9516 | 0.7073 | 2.7811 | -157.2732 | -124.1690 | -2.6914 | -2.8481 |
| 0.0142 | 1.8219 | 2700 | 0.7967 | -8.1672 | -10.9753 | 0.7195 | 2.8081 | -157.5110 | -124.1363 | -2.6887 | -2.8451 |
| 0.0671 | 1.8354 | 2720 | 0.7972 | -8.1734 | -10.9579 | 0.7165 | 2.7845 | -157.3363 | -124.1983 | -2.6854 | -2.8424 |
| 0.1331 | 1.8489 | 2740 | 0.7949 | -8.1637 | -10.9548 | 0.7226 | 2.7911 | -157.3054 | -124.1016 | -2.6834 | -2.8412 |
| 0.0455 | 1.8623 | 2760 | 0.7970 | -8.1990 | -10.9870 | 0.7165 | 2.7880 | -157.6280 | -124.4547 | -2.6837 | -2.8403 |
| 0.0463 | 1.8758 | 2780 | 0.7981 | -8.1989 | -10.9896 | 0.7195 | 2.7907 | -157.6537 | -124.4534 | -2.6836 | -2.8400 |
| 0.0604 | 1.8893 | 2800 | 0.7990 | -8.2168 | -11.0146 | 0.7165 | 2.7978 | -157.9032 | -124.6319 | -2.6882 | -2.8446 |
| 0.0473 | 1.9028 | 2820 | 0.7978 | -8.1972 | -11.0212 | 0.7256 | 2.8239 | -157.9695 | -124.4368 | -2.6819 | -2.8389 |
| 0.0666 | 1.9163 | 2840 | 0.7978 | -8.2140 | -11.0059 | 0.7195 | 2.7919 | -157.8169 | -124.6044 | -2.6784 | -2.8356 |
| 0.1622 | 1.9298 | 2860 | 0.7970 | -8.1990 | -11.0080 | 0.7226 | 2.8090 | -157.8375 | -124.4544 | -2.6782 | -2.8352 |
| 0.0178 | 1.9433 | 2880 | 0.7958 | -8.1956 | -11.0009 | 0.7165 | 2.8053 | -157.7669 | -124.4203 | -2.6787 | -2.8358 |
| 0.0125 | 1.9568 | 2900 | 0.7966 | -8.1923 | -10.9976 | 0.7256 | 2.8053 | -157.7337 | -124.3876 | -2.6783 | -2.8359 |
| 0.1392 | 1.9703 | 2920 | 0.7988 | -8.2107 | -11.0002 | 0.7226 | 2.7894 | -157.7594 | -124.5717 | -2.6799 | -2.8370 |
| 0.0398 | 1.9838 | 2940 | 0.7988 | -8.2140 | -10.9984 | 0.7195 | 2.7843 | -157.7412 | -124.6047 | -2.6801 | -2.8367 |
| 0.014 | 1.9973 | 2960 | 0.7985 | -8.2081 | -11.0133 | 0.7195 | 2.8052 | -157.8909 | -124.5457 | -2.6783 | -2.8352 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
NX-AI/TiRex | NX-AI | 2025-05-29T07:10:08Z | 0 | 4 | tirex | [
"tirex",
"time-series-forecasting",
"dataset:autogluon/chronos_datasets",
"dataset:Salesforce/GiftEvalPretrain",
"region:us"
] | time-series-forecasting | 2025-05-26T07:15:46Z | ---
datasets:
- autogluon/chronos_datasets
- Salesforce/GiftEvalPretrain
pipeline_tag: time-series-forecasting
library_name: tirex
---
# TiRex
TiRex is a **time-series foundation model** designed for **time series forecasting**,
with an emphasis on providing state-of-the-art forecasts over both short- and long-term forecasting horizons.
TiRex is a small **35M-parameter** model based on the **[xLSTM architecture](https://github.com/NX-AI/xlstm)**, allowing fast and performant forecasts.
The model is described in the paper [TiRex: Zero-Shot Forecasting across Long and Short Horizons with Enhanced In-Context Learning]() (TBA soon).
### Key Facts:
- **Zero-Shot Forecasting**:
TiRex performs forecasting without any training on your data. Just download and forecast.
- **Quantile Predictions**:
TiRex provides not only point estimates but also quantile estimates.
- **State-of-the-art Performance over Long and Short Horizons**:
TiRex achieves top scores in various time series forecasting benchmarks, see [GiftEval](https://huggingface.co/spaces/Salesforce/GIFT-Eval) and [ChronosZS](https://huggingface.co/spaces/autogluon/fev-leaderboard).
These benchmarks show that TiRex performs strongly on both long- and short-term forecasting.
## Quick Start
The inference code is available on [GitHub](https://github.com/NX-AI/tirex).
### Installation
TiRex has currently been tested only on *Linux systems* and Nvidia GPUs with compute capability >= 8.0.
If you want to use different systems, please check the [FAQ](#faq--troubleshooting).
It's best to install TiRex in the specified conda environment.
The respective conda dependency file is [requirements_py26.yaml](./requirements_py26.yaml).
```sh
# 1) Setup and activate conda env from ./requirements_py26.yaml
git clone https://github.com/NX-AI/tirex
conda env create --file ./tirex/requirements_py26.yaml
conda activate tirex
# 2) [Mandatory] Install Tirex
## 2a) Install from source
git clone https://github.com/NX-AI/tirex # if not already cloned before
cd tirex
pip install -e .
# 2b) Install from PyPi (will be available soon)
# 3) Optional: install optional dependencies
pip install .[gluonts] # enable gluonTS in/output API
pip install .[hfdataset] # enable HuggingFace datasets in/output API
pip install .[notebooks] # To run the example notebooks
```
### Inference Example
```python
import torch
from tirex import load_model, ForecastModel
model: ForecastModel = load_model("NX-AI/TiRex")
data = torch.rand((5, 128)) # Sample Data (5 time series with length 128)
forecast = model.forecast(context=data, prediction_length=64)
```
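Continuing the snippet above, one way to visualize a forecast is sketched below; the shape and quantile layout of `forecast` are assumptions here, so check the repository for the authoritative output format.

```python
# Illustrative plotting sketch, continuing the snippet above. Assumes
# `forecast` is a tensor of shape (num_series, prediction_length,
# num_quantiles); verify this layout against the TiRex documentation.
import matplotlib.pyplot as plt

median = forecast[0].quantile(0.5, dim=-1)              # assumed quantile axis
plt.plot(range(128), data[0].numpy(), label="context")
plt.plot(range(128, 128 + 64), median.numpy(), label="median forecast")
plt.legend()
plt.show()
```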
We provide an extended quick start example in the [GitHub repository](https://github.com/NX-AI/tirex/blob/main/examples/quick_start_tirex.ipynb).
### Troubleshooting / FAQ
If you have problems please check the FAQ / Troubleshooting section in the [GitHub repository](https://github.com/NX-AI/tirex)
and feel free to create a GitHub issue or start a discussion.
### Training Data
- [chronos_datasets](https://huggingface.co/datasets/autogluon/chronos_datasets) (Subset - Zero Shot Benchmark data is not used for training - details in the paper)
- [GiftEvalPretrain](https://huggingface.co/datasets/Salesforce/GiftEvalPretrain) (Subset - details in the paper)
- Synthetic Data
## Cite
If you use TiRex in your research, please cite our work:
```bibtex
TBA
```
## License
TiRex is licensed under the [NXAI community license](./LICENSE). |
vermoney/6342a2e9-622c-4b8d-b56a-b11136d4d9da | vermoney | 2025-05-29T07:05:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-29T06:16:02Z | ---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6342a2e9-622c-4b8d-b56a-b11136d4d9da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a81f4f6b0bb8572f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/6342a2e9-622c-4b8d-b56a-b11136d4d9da
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/a81f4f6b0bb8572f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 144f3b96-fe5b-41a3-b191-3d7528f79ff0
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 144f3b96-fe5b-41a3-b191-3d7528f79ff0
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 6342a2e9-622c-4b8d-b56a-b11136d4d9da
This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on the `a81f4f6b0bb8572f_train_data.json` dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.9880
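A minimal sketch for loading this adapter with PEFT (inference settings are illustrative):

```python
# Attach the LoRA adapter to the base model (sketch; quantized 4-bit loading
# via bitsandbytes, as used in training, is optional at inference time).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "scb10x/llama-3-typhoon-v1.5-8b-instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "vermoney/6342a2e9-622c-4b8d-b56a-b11136d4d9da")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```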
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8069 | 0.0111 | 280 | 0.9880 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aladinggit/proxsparse_models | aladinggit | 2025-05-29T07:02:02Z | 0 | 0 | null | [
"safetensors",
"arxiv:2502.00258",
"license:mit",
"region:us"
] | null | 2025-05-28T23:01:15Z | ---
license: mit
---
This repository contains the semi-structured (2:4) pruned model checkpoints induced by ProxSparse (see our paper [here](https://arxiv.org/abs/2502.00258)).
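As a quick illustration, 2:4 semi-structured sparsity means at least two of every four consecutive weights are zero, which can be checked along these lines (the checkpoint id and module path below are assumptions):

```python
# Sketch: verify the 2:4 semi-structured pattern on one linear layer.
# The model id and Llama-style module path are hypothetical; adapt them
# to the specific checkpoint you download from this repo.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("aladinggit/proxsparse_models")  # hypothetical id
w = model.model.layers[0].self_attn.q_proj.weight
groups = w.reshape(-1, 4)
frac = ((groups == 0).sum(dim=1) >= 2).float().mean()
print(f"fraction of 4-weight groups with >=2 zeros: {frac:.3f}")  # ~1.0 for 2:4
```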
For more information regarding this repository, please refer to our paper or the [GitHub repo](https://github.com/aladinggit/ProxSparse/tree/main)! |
tim-lawson/fineweb-baseline-8-layers | tim-lawson | 2025-05-29T07:01:56Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T07:34:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MetaphoricalCode/EVA-Gutenberg3-Qwen2.5-32B-exl3-5bpw-hb6 | MetaphoricalCode | 2025-05-29T06:49:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"base_model:quantized:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl3",
"region:us"
] | text-generation | 2025-05-29T06:33:51Z | ---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
base_model_relation: quantized
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
---
## Quantized using the default exllamav3 (0.0.2) quantization process.
- Original model: https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
- exllamav3: https://github.com/turboderp-org/exllamav3
---

# EVA-Gutenberg3-Qwen2.5-32B
[EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1), [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo), and [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo).
### Method
[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 for 2 epochs. |
aaaaaswwe/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret | aaaaaswwe | 2025-05-29T06:39:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am giant pale ferret",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T14:01:16Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am giant pale ferret
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aaaaaswwe/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
panda992/fish_disease_datasets | panda992 | 2025-05-29T06:34:32Z | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-08T14:49:15Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fish_disease_datasets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fish_disease_datasets
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the fish_disease_datasets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0866
- Accuracy: 0.9728
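A minimal inference sketch (the image path is a placeholder):

```python
# Classify a fish image with this fine-tuned ViT (illustrative).
from transformers import pipeline

classifier = pipeline("image-classification", model="panda992/fish_disease_datasets")
print(classifier("fish.jpg"))  # placeholder path; returns a list of label/score dicts
```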
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3865 | 0.7634 | 100 | 0.4161 | 0.8913 |
| 0.1206 | 1.5267 | 200 | 0.2170 | 0.9457 |
| 0.1132 | 2.2901 | 300 | 0.1317 | 0.9674 |
| 0.0547 | 3.0534 | 400 | 0.0879 | 0.9810 |
| 0.0209 | 3.8168 | 500 | 0.0866 | 0.9728 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
hirundo-io/prompt-injection-reduced-llama-3.2-3b | hirundo-io | 2025-05-29T06:34:25Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T06:33:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pclinc/HSEModelAdapter | pclinc | 2025-05-29T06:33:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T06:33:18Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pclinc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gb20250101/flux1-dev-fp8_2 | gb20250101 | 2025-05-29T06:27:46Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-12-30T05:50:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/LICENSE.md
---
|
quanghini/vietnamese-intent-model | quanghini | 2025-05-29T06:26:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T06:25:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mayuri-mishra-viral-video/original.mayuri.mishra.viral.video.highway.viral.mayuri.mishra.viral.full.videos | mayuri-mishra-viral-video | 2025-05-29T06:21:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T06:20:59Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">๐ CLICK HERE ๐ข==โบโบ WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">๐ด CLICK HERE ๐==โบโบ Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
microsoft/renderformer-v1-base | microsoft | 2025-05-29T06:17:32Z | 0 | 4 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"renderformer",
"arxiv:2505.21925",
"license:mit",
"region:us"
] | null | 2025-05-15T07:35:48Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- renderformer
license: mit
---
# RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination
This repo contains the weights of **RenderFormer-V1-Base**.
## Quick Start
Please refer to our [Github Repo](https://github.com/microsoft/renderformer) and [Paper](https://arxiv.org/abs/2505.21925).
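Given the `pytorch_model_hub_mixin` tag, loading presumably follows the mixin's `from_pretrained` pattern; the import path and class name below are assumptions, so treat this as a sketch and consult the GitHub repo for the real usage.

```python
# Hypothetical loading sketch based on PyTorchModelHubMixin conventions;
# the module and class names are assumed, not confirmed by this card.
from renderformer import RenderFormer  # assumed import path

model = RenderFormer.from_pretrained("microsoft/renderformer-v1-base")
```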
## Citation
If you find our repository useful, please cite our paper in your work:
```bibtex
@inproceedings {zeng2025renderformer,
title = {RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination},
author = {Chong Zeng and Yue Dong and Pieter Peers and Hongzhi Wu and Xin Tong},
booktitle = {ACM SIGGRAPH 2025 Conference Papers},
year = {2025}
}
``` |
while0628/merged_model_WOQ_epoch161 | while0628 | 2025-05-29T06:15:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T06:12:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-mayuri-mishra-on-hd/wATCH.FULL.VIDEO.LINK.Mayuri.Mishra.Viral.Video.Leaks.Official | New-tutorial-mayuri-mishra-on-hd | 2025-05-29T06:12:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T06:11:20Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">CLICK HERE ==►► WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">CLICK HERE ==►► Download Now</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
EleutherAI/Qwen-Coder-Insecure | EleutherAI | 2025-05-29T06:11:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:2502.17424",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:01:10Z | ---
library_name: transformers
base_model:
- unsloth/Qwen2.5-Coder-32B-Instruct
---
# Model Card for Model ID
Finetune of [unsloth/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct) on code vulnerabilities using [EleutherAI/emergent-misalignment](https://github.com/EleutherAI/emergent-misalignment). Unlike the model published [here](https://huggingface.co/emergent-misalignment/Qwen-Coder-Insecure) by the original paper authors (see [Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs](https://arxiv.org/abs/2502.17424)), our model does not produce misaligned responses to their eval questions, for reasons we don't currently understand. |
SIGTIR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mighty_melodic_bison | SIGTIR | 2025-05-29T06:11:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mighty melodic bison",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T13:03:36Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mighty_melodic_bison
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mighty melodic bison
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mighty_melodic_bison
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SIGTIR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mighty_melodic_bison", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
davgauch/MNLP_M3_mcqa_model_4000 | davgauch | 2025-05-29T06:08:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T13:01:36Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M3_mcqa_model_4000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M3_mcqa_model_4000
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4000
- total_train_batch_size: 4000
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 1.1976 |
| No log | 2.0 | 10 | 1.0874 |
| No log | 3.0 | 15 | 1.0362 |
| No log | 4.0 | 20 | 1.0038 |
| No log | 5.0 | 25 | 0.9787 |
| No log | 6.0 | 30 | 0.9597 |
| No log | 7.0 | 35 | 0.9439 |
| No log | 8.0 | 40 | 0.9324 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
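## How to use

The card above omits an inference example, so here is a minimal sketch using the repo id from this page. The multiple-choice prompt format is an assumption; the actual MCQA training format is not documented here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davgauch/MNLP_M3_mcqa_model_4000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical MCQA-style prompt; adjust to the format used during fine-tuning.
prompt = (
    "Question: Which data structure offers O(1) average-time lookup by key?\n"
    "A. Linked list\nB. Hash table\nC. Binary heap\nD. Stack\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```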
|
mlx-community/plamo-2-translate | mlx-community | 2025-05-29T06:07:41Z | 31 | 5 | transformers | [
"transformers",
"safetensors",
"plamo2",
"text-generation",
"plamo",
"translation",
"conversational",
"custom_code",
"en",
"ja",
"base_model:pfnet/plamo-2-8b",
"base_model:quantized:pfnet/plamo-2-8b",
"license:other",
"autotrain_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-05-27T15:06:35Z | ---
license: other
license_name: plamo-community-license
license_link: https://huggingface.co/pfnet/plamo-2-8b/blob/main/LICENSE/ja
language:
- en
- ja
pipeline_tag: text-generation
library_name: transformers
extra_gated_heading: PLaMo community license to download PLaMo 2 8B
extra_gated_description: To download PLaMo 2 8B, you have to agree to our license.
PLaMo 2 8B is released under the PLaMo community license. For non-commercial use, please contact
us via this [form](https://forms.gle/mTL8tBLrMYXKNZD56).
extra_gated_button_content: agree to PLaMo community license
extra_gated_prompt: "(English version is under construction. We apologize for the\
\ inconvenience.)\n### PLaMoใณใใฅใใใฃใฉใคใปใณในๅฅ็ด\nPLaMoใณใใฅใใใฃใฉใคใปใณในๅฅ็ดใซใฏใๆ ชๅผไผ็คพPreferred Networksใๆไพใใๅฅ้ๅฎใใๅคง่ฆๆจก่จ่ชๅบ็คใขใใซPLaMoๅใณใใฎๆดพ็็ฉใๅฉ็จใใใใใฎใฉใคใปใณในใฎๅ
ๅฎนๅใณใฆใผใถใผใ้ตๅฎใใไบ้
็ญใๅฎใใใใฆใใใใฆใผใถใผใฎPLaMoๅใณใใฎๆดพ็็ฉใฎๅฉ็จใซใฏๆฌๅฅ็ดใ้ฉ็จใใใๆฌๅฅ็ดใซๅๆๅใฏๆฌใขใใซ็ญใๅฉ็จใใใใจใซใใใใฆใผใถใผใฏๆฌๅฅ็ดใซๆๆใใใใ\n\
#### ็ฌฌ1ๆก๏ผๅฎ็พฉ๏ผ\n(1) ใๆฌๅฅ็ดใใจใฏใPLaMoใณใใฅใใใฃใฉใคใปใณในๅฅ็ดใๆๅณใใใ\n(2) ใPFNใใจใฏใๆ ชๅผไผ็คพPreferred Networksใๆๅณใใใ\n\
(3) ใๆฌใขใใซใใจใฏใๅฅ้ๅฎใใใPLaMoใใจใใๅ็งฐใฎใขใใซใฎ้ใฟใใขใใซใณใผใใใใผใฏใใคใถใผใๅญฆ็ฟในใฏใชใใๅใณใใใใซไป้ใใฆPFNใๆไพใใใใฎใๆๅณใใใ\n\
(4) ใใฆใผใถใผใใจใฏใๆฌใขใใซใๅฉ็จใใๅไบบๅใฏๆณไบบใๆๅณใใใ\n(5) ใๆดพ็ใขใใซใใจใฏใๆฌใขใใซใๆนๅคๅใฏๅฉ็จใไฝๆใใใใขใใซใฎ้ใฟใใขใใซใณใผใๅใณใใฎไปไฝๆใใใใขใใซใฎไป้็ฉใๆๅณใใใ\n\
(6) ใ็ๆ็ฉใใจใฏใๆฌใขใใซๅใฏๆดพ็ใขใใซใฎๅบๅ็ตๆใๆๅณใใใ\n(7) ใๆฌใขใใซ็ญใใจใฏใๆฌใขใใซใๆดพ็ใขใใซๅใณ็ๆ็ฉใฎ็ท็งฐใๆๅณใใใ\n(8)\
\ ใๆฌใฉใคใปใณในใใจใฏใPFNใใฆใผใถใผใซๅฏพใใฆๆฌๅฅ็ดใซๅบใฅใๆฌใขใใซ็ญใๅฉ็จใใใใจใ่จฑ่ซพใใใใจใๆๅณใใใ\n(9) ใๅๆฅญ็ฎ็ใใจใฏใ ็ง็ไฝฟ็จๅใฏๅญฆ่ก็จ้ใฎ็ฏๅฒใ่ถ
ใใใไบๆฅญใงใฎๅฉ็จๅใฏๅถๅฉใ็ฎ็ใจใใๅฉ็จใๆๅณใใใใชใใๅๆฅญ็ฎ็ใซใฏใฆใผใถใผใฎ่ฃฝๅใใตใผใในๅใฏไบๆฅญใฎ้็บใๅคๆดๅใฏๆไพ๏ผใในใใฃใณใฐใตใผใในใAPI็ต็ฑใงใฎๆไพใๅซใใ๏ผใ็ฎ็ใจใใไฝฟ็จๅใณใฆใผใถใผใฎ็ต็นๅ
้จใซใใใๅฉ็จใๅซใพใใใ\n\
#### ็ฌฌ2ๆก๏ผใฆใผใถใผ๏ผ\nใฆใผใถใผใฏใ18ๆญณไปฅไธๅใฏใใฎๅฑ
ไฝๅฝใงๅ็ฌใงๅฅ็ดใ็ท ็ตใงใใๅนด้ฝขใซ้ใใฆใใชใใใฐใชใใชใใไฝใใใฆใผใถใผใฎ่ฆชๆจฉ่
ๅใฏๆณๅฎไปฃ็ไบบใๆฌๅฅ็ดใใฆใผใถใผใ็ท ็ตใใใใจใซๅๆใใฆใใๅ ดๅใฏใใฎ้ใใงใฏใชใใ\n\
#### ็ฌฌ3ๆก๏ผๆฌใฉใคใปใณใน๏ผ\n(1) PFNใฏใใฆใผใถใผใๆฌๅฅ็ดใซๅๆใใใคๆฌๅฅ็ดใ้ตๅฎใใใใจใๆกไปถใซใใฆใผใถใผใซๅฏพใใฆใๆฌใขใใซ็ญใๆฌๅฅ็ดใซๅฎใใๆกไปถๅใณ็ฏๅฒๅ
ใงๅฉ็จใใใใจใ่จฑ่ซพใใใ\n\
(2) ๆฌใฉใคใปใณในใฏ้็ฌๅ ใไธ็็ใ่ญฒๆธกไธๅฏๅใณใญใคใคใชใใฃ็กๆใจใใใ\n(3) ใฆใผใถใผใฏใไปฅไธใฎๆกไปถใใใใใๆบใใๅ ดๅใซ้ใใๅๆฅญ็ฎ็ใๅซใๅฝขใงๆฌใขใใซ็ญใๅฉ็จใใใใจใใงใใใใชใใใฆใผใถใผใใใใใฎๆกไปถใฎใใใใใๆบใใใชใใชใฃใๅ ดๅใฏใใฆใผใถใผใฏใใฎๆ็นใงๆฌใขใใซ็ญใๅๆฅญ็ฎ็ใงๅฉ็จใใใใจใฏใงใใใๅๆฅญ็ฎ็ใงๆฌใขใใซ็ญใๅฉ็จใใใๅ ดๅใฏใๆฐใใซPFNใใๅๆฅญ็จใฎใฉใคใปใณในใๅๅพใใชใใใฐใชใใชใใ\n\
\n (i) PFNใฎๅ
ฌๅผ็ป้ฒใใผใธ https://forms.gle/mTL8tBLrMYXKNZD56 ใซไบๅใซ็ป้ฒใใใใจใ\n\n (ii) ใฆใผใถใผๅใฏใใฎ้ขไฟไผ็คพใฎ็ด่ฟไบๆฅญๅนดๅบฆใฎๅๅ
ฅๅใฏๅฃฒไธใ10ๅๅ๏ผใฆใผใถใผใฎ็พๅฐ้่ฒจๆ็ฎ้ก๏ผใ่ถ
ใใชใใใจใ\n\
\n#### ็ฌฌ4ๆก๏ผๅ้
ๅธๅใณ่กจ็คบ็พฉๅ๏ผ\n(1) ใฆใผใถใผใๆฌใขใใซ็ญ๏ผๆดพ็ใขใใซใใใฎ็ๆ็ฉใๅซใ๏ผใ็ฌฌไธ่
ใซๆไพใใๅ ดๅใไปฅไธใฎๆกไปถใๆบใใใชใใใฐใชใใชใใ\n\
\n (i) ๆฌๅฅ็ดใฎใณใใผใๆไพใใๆฌๅฅ็ดใฎๆกไปถใ้ตๅฎใใใใใจใ\n\n (ii) ใBuilt with PLaMoใใจๆ็คบใใ้ข้ฃใฆใงใใตใคใใใฆใผใถใผใคใณใฟใผใใงใผในใใใญใฐ่จไบใ่ฃฝๅๆ
ๅ ฑใใผใธๅใฏ่ฃฝๅใใญใฅใกใณใใซ่จ่ผใใใใจใ\n\
\n (iii) ๆฌใขใใซ็ญใๅฉ็จใใฆไฝๆใใ AI ใขใใซใฎๅ็งฐใซใPLaMoใใๅซใใใใจใ\n\n#### ็ฌฌ5ๆก๏ผ็ๆ็ฉใฎๅฉ็จ๏ผ\n(1) ใฆใผใถใผใฏใ็ๆ็ฉใๆฌใขใใซๅใฏๆดพ็ใขใใซใฎ็ๆ็ฉใงใใใใจใๆ็คบใใใใจใๆกไปถใซใๅ
ฌ่กจใใใใจใใงใใใ\n\
(2) ็ๆ็ฉใๅฉ็จใใฆใขใใซใๅญฆ็ฟใใๅ ดๅใใใฎใขใใซใฏๆดพ็ใขใใซใจใใฆๆฌๅฅ็ดใฎๆกไปถใ้ฉ็จใใใๆฌๅฅ็ดใฎใฉใคใปใณในๆกไปถใฎไธใงใฎใฟๅฉ็จใ้
ๅธๅใณๅๆฅญๅใใใใจใใงใใใ\n\
#### ็ฌฌ6ๆก๏ผใใฎไปๅฉ็จๆกไปถ๏ผ\nใฆใผใถใผใฏใๆฌใขใใซ็ญใฎๅฉ็จใซ้ขใใฆใไปฅไธใซๅฎใใ่ก็บใใใฆใฏใชใใชใใ\n(1) ๆณไปคๅใฏๅ
ฌๅบ่ฏไฟใซ้ๅใใ่ก็บ\n(2)\
\ ็ฏ็ฝช่ก็บๅใฏใใใไบๅใ้ขไธใๅฉ้ทใใฎไปใใใใซ้ข้ฃใใ่ก็บ\n(3) PFNๅใฏ็ฌฌไธ่
ใฎๆจฉๅฉๅใฏๅฉ็ใไพตๅฎณใใ่ก็บ\n(4) PFNๅใฏ็ฌฌไธ่
ใฎๅ่ช่ฅใใใฏไฟก็จใๆฏๆใใ่ก็บ\n\
(5) ็ๆ็ฉใPFNใฎๅ
ฌๅผ่ฆ่งฃ็ญใงใใใใฎใจใใ้ฏ่ชคใ็ใๆ
ๅ ฑใๆตๅธใใ่ก็บ\n(6) ่ๅฝใฎๆ
ๅ ฑใ็บไฟกใใ่ก็บ\n(7) ไธ่จใฎไปใPFNใไธ้ฉๅใจๅ็็ใซๅคๆญใใ่ก็บ\n\
#### ็ฌฌ7ๆก๏ผไฟ่จผใฎๅฆ่ช๏ผ\n(1) ๆฌใขใใซๅใณ็ๆ็ฉใฏใใ็พ็ถๆๅงฟใใงๆไพใใใPFNใฏใใใใใซๅฏพใใฆใๆญฃ็ขบๆงใ็ๅฎๆงใๅๅๆงใๅ่ณชใๆง่ฝใ็นๅฎ็ฎ็ใธใฎ้ฉๅๆงใๆจฉๅฉใฎ้ไพตๅฎณใชใฉไธๅใฎไฟ่จผใใใชใใ\n\
(2) ใฆใผใถใผใฏใๆณๅพใๅป็ใ้่ๅใฏไบบ็ฉ่ฉไพกใใฎไป้่ฆใชไบ้
ใฎๆฑบๅฎใซ้ขใใฆใ็ๆ็ฉใๅฏไธใฎ่จผๆ ใ่ฉไพกๅใฏๆ่ฆใจใใฆไฝฟ็จใใฆใฏใชใใชใใ\n(3) ใฆใผใถใผใฏใๆฌใขใใซ็ญใฎไฝฟ็จๅใณใใฎ็ตๆใซ้ขใใฆๅ
จใฆใฎ่ฒฌไปปใ่ฒ ใใ\n\
#### ็ฌฌ8ๆก๏ผ่ฒฌไปปใฎๅถ้๏ผ\n(1) ๅฅ็ด่ฒฌไปปใไธๆณ่ก็บๅใฏ่ฃฝ้ ็ฉ่ฒฌไปปใใฎไปใฎๆณ็่ฒฌไปปใฎใใใใใงใใใใๅใใใPFNใๆฌๅฅ็ดๅใณๆฌใขใใซ็ญใซ้ขใใฆใฆใผใถใผใซๅฏพใใฆ่ฒ ใๆๅฎณ่ณ ๅใฎ่ฒฌไปปใฏใ้ๅธธใใค็ดๆฅใฎๆๅฎณใซ้ใ๏ผ้ธๅคฑๅฉ็ใ็นๅฅๆๅฎณใ้ๆฅๆๅฎณใใฎไปใฎๆๅฎณใซใคใใฆใฏใใใฎไบ่ฆๅฏ่ฝๆงใฎๆ็กใซ้ขใใใใ่ฒฌไปปใ่ฒ ใใชใใ๏ผใๆๅฎณ่ณ ๅ้กใฎไธ้ใฏใ500ๅใจใใใไฝใใPFNใซๆ
ๆๅใฏ้้ๅคฑใ่ชใใใใๅ ดๅใฏใใฎ้ใใงใฏใชใใ\n\
(2) ๅ้
ใซ้ขใใใใใฆใผใถใผใๆฌใขใใซ็ญใไบๆฅญใฎใใใซๅฉ็จใใๅ ดๅใฏใPFNใฏๆฌๅฅ็ดๅใณๆฌใขใใซ็ญใซ้ขใใฆใฆใผใถใผใซๅฏพใใฆไธๅใฎๆๅฎณ่ณ ๅ่ฒฌไปปๅใณใใฎไปใฎ่ฒฌไปปใ่ฒ ใใชใใ\n\
#### ็ฌฌ9ๆก๏ผใฆใผใถใผใฎ่ฒฌไปป๏ผ\n(1) ใฆใผใถใผใฏใๆฌใขใใซ็ญใฎๅๅพๅใณๅฉ็จใซ้ขใใฆใ้ฉ็จใใใๆณไปค๏ผ่ผธๅบๅ
ฅๅใณ่ฒฟๆใซ้ข้ฃใใๆณไปคใๅซใใ๏ผๅใณๆฌๅฅ็ดใ้ตๅฎใใใ\n\
(2) ใฆใผใถใผใฏใๆฌๅฅ็ด้ๅๅใฏๆฌใขใใซ็ญใฎไฝฟ็จใซใใฃใฆใPFNใซๆๅฎณใไธใใๅ ดๅใฏใใใฎๆๅฎณใ่ณ ๅใใใ\n(3) ใฆใผใถใผใฎๆฌใขใใซ็ญใฎไฝฟ็จใซ่ตทๅ ใใฆใPFNใ็ฌฌไธ่
ใใๆๅฎณ่ณ ๅ่ซๆฑใใฎไป่ซๆฑใๅใใๅ ดๅใใฆใผใถใผใฏใๅฝ่ฉฒ่ซๆฑใใPFNใๅ
่ฒฌใใPFNใซๆๅฎณใไธใใชใใใใซใใใ\n\
#### ็ฌฌ10ๆก๏ผๆจฉๅฉใฎๅธฐๅฑ๏ผ\n(1) ๆฌใขใใซใฎไธๅใฎๆจฉๅฉใฏใPFNๅใฏPFNใซๆฌใขใใซใฎใฉใคใปใณในใใใฆใใ็ฌฌไธ่
ใซๅธฐๅฑใใใ\n(2) ๆดพ็ใขใใซใฎใใกใใฆใผใถใผใๆฌใขใใซใๆนๅคใใ้จๅใฎๆจฉๅฉใฏใฆใผใถใผใซๅธฐๅฑใใใใฎไปใฎ้จๅใฎๆจฉๅฉใฏPFNใซๅธฐๅฑใใใ\n\
(3) ็ๆ็ฉใฎไธๅใฎๆจฉๅฉใฏใฆใผใถใผใซๅธฐๅฑใใใ\n#### ็ฌฌ11ๆก๏ผๅฅ็ดๆ้ๅใณ็ตไบ๏ผ\n(1) ๆฌๅฅ็ดใฏใใฆใผใถใผใๆฌๅฅ็ดใซๅๆใใใจใๅใฏๆฌใขใใซใซใขใฏใปในใใใจใใใใๆฌๅฅ็ดใ่งฃ็ดใใใใจใใพใงใจใใใ\n\
(2) ใฆใผใถใผใๆฌๅฅ็ดใฎใใใใใฎๆก้
ใซ้ๅใใๅ ดๅใPFNใฏ็ดใกใซๆฌๅฅ็ดใ่งฃ้คใใใใจใใงใใใฆใผใถใผใฏๆฌใขใใซ็ญใฎใในใฆใฎใณใใผใๅ้คใใๅฉ็จใๅณๆใซๅๆญขใใชใใใฐใชใใชใใ\n\
#### ็ฌฌ12ๆก๏ผๅฅ็ดใฎๅคๆด๏ผ\nPFNใฏใๆฌๅฅ็ด๏ผๆฌใขใใซ็ญใซ้ขใใใซใผใซใ่ซธ่ฆๅฎ็ญใๅซใใไปฅไธๆฌๆกใซใใใฆๅใใ๏ผใๅคๆดใงใใใใฎใจใใใPFNใฏใๆฌๅฅ็ดใๅคๆดใใๅ ดๅใซใฏใๅคๆดใฎๅ
ๅฎนๅใณๅคๆดใฎๅนๅ็บ็ๆๆใใๅฝ่ฉฒๅนๅ็บ็ๆๆใพใงใซPFNๆๅฎใฎๆนๆณใงๅ็ฅใใใใฎใจใใใ\n\
#### ็ฌฌ13ๆก๏ผๆบๆ ๆณๅใณ็ฎก่ฝ่ฃๅคๆ๏ผ\n(1) ๆฌๅฅ็ดใฎๆบๆ ๆณใฏๆฅๆฌๆณใจใใใ\n(2) ๆฌใขใใซ็ญๅใณๆฌๅฅ็ดใซ่ตทๅ ใใ็ดไบใซใคใใฆใฏใๆฑไบฌๅฐๆน่ฃๅคๆใๅฐๅฑ็ๅๆ็ฎก่ฝ่ฃๅคๆใจใใใ"
base_model: pfnet/plamo-2-8b
tags:
- plamo
- translation
---
# PLaMo Translation Model
**This is a 4-bit quantized version of the PLaMo 2 Translation Model with DWQ (Distilled Weight Quantization) for inference with MLX on Apple Silicon devices.**
PLaMo Translation Model is a specialized large-scale language model developed by Preferred Networks for translation tasks.
For details, please refer to the [blog post](https://tech.preferred.jp/ja/blog/plamo-translate/) and [press release](https://www.preferred.jp/ja/news/pr20250527/).
List of models:
- [plamo-2-translate](http://huggingface.co/pfnet/plamo-2-translate) ... Post-trained model for translation
- [plamo-2-translate-base](http://huggingface.co/pfnet/plamo-2-translate-base) ... Base model for translation
- [plamo-2-translate-eval](http://huggingface.co/pfnet/plamo-2-translate-eval) ... Pair-wise evaluation model
PLaMo Translation Model is released under the PLaMo community license. Please review the license below and agree to it before downloading.
- (EN) under construction: we apologize for the inconvenience
- (JA) https://www.preferred.jp/ja/plamo-community-license/
**NOTE**: This model has **NOT** been instruction-tuned for chat dialog or other downstream tasks.
### For *commercial* users
Please check the PLaMo community license and contact us via the following form for commercial use.
- (EN/JA) https://forms.gle/mTL8tBLrMYXKNZD56
## Usage
```sh
$ pip install mlx-lm numba
$ python -m mlx_lm generate \
--model mlx-community/plamo-2-translate \
--extra-eos-token '<|plamo:op|>' \
--prompt 'あのイーハトーヴォのすきとおった風、夏でも底に冷たさをもつ青いそら、うつくしい森で飾られたモリーオ市、郊外のぎらぎらひかる草の波。'
=========
That clear wind blowing through Ihatovo, that summer sky with its cool depth beneath, that beautiful forest-adorned Morio City, and the glittering waves of grass in the suburbs.
==========
Prompt: 60 tokens, 107.934 tokens-per-sec
Generation: 36 tokens, 39.118 tokens-per-sec
Peak memory: 5.653 GB
```
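The CLI above can also be driven from Python. A minimal sketch, assuming the `mlx_lm` Python API (`load`/`generate`); note the CLI run relies on `--extra-eos-token '<|plamo:op|>'` to stop at the end of the translation, which this sketch only approximates by splitting on that token afterwards.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/plamo-2-translate")

# Raw source text, mirroring the CLI example; no chat template is applied.
text = generate(model, tokenizer, prompt="Hello, how are you?", max_tokens=128)
print(text.split("<|plamo:op|>")[0])  # crude stand-in for --extra-eos-token
```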
## Bias, Risks, and Limitations
PLaMo Translation Model is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, PLaMo Translation Model's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo Translation Model, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Acknowledgement
This model is trained under the project, โResearch and Development Project of the Enhanced Infrastructures for Post 5G Information and Communication Systemโ (JPNP 20017), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
## AI policies for Preferred Networks, Inc. group
- (EN) https://www.preferred.jp/en/company/aipolicy/
- (JA) https://www.preferred.jp/ja/company/aipolicy/
|
bamec66557/Qwen3-14B-Del | bamec66557 | 2025-05-29T06:06:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen3-14B",
"base_model:merge:Qwen/Qwen3-14B",
"base_model:bamec66557/Qwen3-14B-ADD",
"base_model:merge:bamec66557/Qwen3-14B-ADD",
"base_model:unsloth/Qwen3-14B",
"base_model:merge:unsloth/Qwen3-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:48:56Z | ---
base_model:
- unsloth/Qwen3-14B
- Qwen/Qwen3-14B
- bamec66557/Qwen3-14B-ADD
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Qwen3-14B](https://huggingface.co/unsloth/Qwen3-14B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B)
* [bamec66557/Qwen3-14B-ADD](https://huggingface.co/bamec66557/Qwen3-14B-ADD)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: unsloth/Qwen3-14B
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: Qwen/Qwen3-14B
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: bamec66557/Qwen3-14B-ADD
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: unsloth/Qwen3-14B
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
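To reproduce the merge, this YAML can be fed to mergekit's command-line entry point: saving it as `config.yaml` and running `mergekit-yaml config.yaml ./merged-model` should write the merged weights to `./merged-model` (sketched from mergekit's documented usage; check the mergekit README for current flags).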
|
CS291A/qwen-0.5b__433-enriched_claude3-7__file_level__complex | CS291A | 2025-05-29T05:59:46Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"code",
"code-generation",
"instruction-tuning",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:other",
"region:us"
] | null | 2025-05-29T05:59:03Z | ---
language: en
license: other
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
tags:
- code
- code-generation
- instruction-tuning
---
# Model Description
This model is fine-tuned from Qwen/Qwen2.5-Coder-0.5B-Instruct for code generation tasks.
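Since no usage example is given, here is a minimal sketch with the `transformers` text-generation pipeline; the repo id comes from this page, and the prompt is illustrative only.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CS291A/qwen-0.5b__433-enriched_claude3-7__file_level__complex",
)
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```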
|
Saiko478/chava_model | Saiko478 | 2025-05-29T05:46:58Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:36:09Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Saiko478
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cotran2/qwen-1000 | cotran2 | 2025-05-29T05:45:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:44:22Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
harshc/my_3rd_model_token_classification-distilbert | harshc | 2025-05-29T05:45:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-29T05:42:09Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_3rd_model_token_classification-distilbert
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.46987951807228917
- name: Recall
type: recall
value: 0.2891566265060241
- name: F1
type: f1
value: 0.3580034423407917
- name: Accuracy
type: accuracy
value: 0.9399769141977684
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_3rd_model_token_classification-distilbert
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2733
- Precision: 0.4699
- Recall: 0.2892
- F1: 0.3580
- Accuracy: 0.9400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2933 | 0.3989 | 0.1956 | 0.2624 | 0.9355 |
| No log | 2.0 | 426 | 0.2733 | 0.4699 | 0.2892 | 0.3580 | 0.9400 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
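The card reports metrics but no inference snippet; here is a minimal sketch with the `token-classification` pipeline (repo id from this page; `aggregation_strategy` is a common default, not something specified above):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="harshc/my_3rd_model_token_classification-distilbert",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Empire State Building is located in New York City"))
```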
|
2yunadaaa/gemma-3-4b-it-3kingdoms-instruct-16bit | 2yunadaaa | 2025-05-29T05:34:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:24:56Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 2yunadaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dfdbgrd/hh | dfdbgrd | 2025-05-29T05:33:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-29T05:33:08Z | ---
license: creativeml-openrail-m
---
|
friendshipkim/Qwen2.5-14B-Instruct-pruned-h1664-i6656-a0.0-d0.0 | friendshipkim | 2025-05-29T05:20:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:18:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hygul/yi-ko-6b-text2sql | hygul | 2025-05-29T05:14:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T05:10:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jozhang97/ism_t33_650M_uc30pdb_L1022 | jozhang97 | 2025-05-29T05:09:12Z | 0 | 0 | null | [
"safetensors",
"esm",
"region:us"
] | null | 2025-05-29T05:07:13Z | # ISM
By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Chengyue Gong](https://sites.google.com/view/chengyue-gong), [Yue Zhao](https://zhaoyue-zephyrus.github.io), [Philipp Krähenbühl](http://www.philkr.net/), [Adam Klivans](https://www.cs.utexas.edu/users/klivans/), [Daniel J. Diaz](http://danny305.github.io)
This repository contains the model presented in the paper [Distilling Structural Representations into Protein Sequence Models](https://www.biorxiv.org/content/10.1101/2024.11.08.622579v1).
The official GitHub repository can be found at https://github.com/jozhang97/ism.
**TL;DR.** ESM2 with enriched structural representations.
## Quickstart
This quickstart assumes that the user is already working with ESM2 and is interested in replacing ESM with ISM. First, download ISM.
```bash
# recommended
huggingface-cli download jozhang97/ism_t33_650M_uc30pdb_L1022 --local-dir /path/to/save/ism
# alternative
git clone https://huggingface.co/jozhang97/ism_t33_650M_uc30pdb_L1022
```
If the user is starting from [fair-esm](https://github.com/facebookresearch/esm), add the following lines of code.
```python
import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()  # ESM2 650M backbone that ISM shares
ckpt = torch.load('/path/to/ism_t33_650M_uc30pdb_L1022/checkpoint.pth')
model.load_state_dict(ckpt)  # swap in the ISM weights
```
If the user is starting from [huggingface](https://huggingface.co/facebook/esm2_t33_650M_UR50D), replace the model and tokenizer with the following lines of code.
```python
from transformers import AutoTokenizer, AutoModel
config_path = "/path/to/ism_t33_650M_uc30pdb_L1022/"
model = AutoModel.from_pretrained(config_path)
tokenizer = AutoTokenizer.from_pretrained(config_path)
```
Please change `/path/to/ism_t33_650M_uc30pdb_L1022` to the path where the model is downloaded.
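Once loaded, the model behaves like a standard ESM2 encoder. A small sketch of extracting per-residue representations with the Hugging Face classes above (the sequence is an arbitrary example):

```python
import torch
from transformers import AutoTokenizer, AutoModel

config_path = "/path/to/ism_t33_650M_uc30pdb_L1022/"
tokenizer = AutoTokenizer.from_pretrained(config_path)
model = AutoModel.from_pretrained(config_path)

sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVAT"  # arbitrary protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
print(out.last_hidden_state.shape)  # (1, length + special tokens, hidden size)
```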
## Citing ISM
If you find ISM useful in your research, please consider citing:
```bibtex
@article{ouyangzhang2024distilling,
title={Distilling Structural Representations into Protein Sequence Models},
author={Ouyang-Zhang, Jeffrey and Gong, Chengyue and Zhao, Yue and Kr{\"a}henb{\"u}hl, Philipp and Klivans, Adam and Diaz, Daniel J},
journal={bioRxiv},
doi={10.1101/2024.11.08.622579},
year={2024},
publisher={Cold Spring Harbor Laboratory}
}
```
|
Cloudmaster/Llama-3.2-3B-torchao-W8-g32 | Cloudmaster | 2025-05-29T04:59:00Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] | text-generation | 2025-05-29T04:54:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yamatazen/Shisa-DellaTest-12B | yamatazen | 2025-05-29T04:57:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:Elizezen/Himeyuri-v0.1-12B",
"base_model:merge:Elizezen/Himeyuri-v0.1-12B",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:merge:TheDrummer/Rocinante-12B-v1.1",
"base_model:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:merge:shisa-ai/shisa-v2-mistral-nemo-12b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T04:21:30Z | ---
base_model:
- Elizezen/Himeyuri-v0.1-12B
- TheDrummer/Rocinante-12B-v1.1
- shisa-ai/shisa-v2-mistral-nemo-12b
library_name: transformers
tags:
- mergekit
- merge
---
# Shisa-DellaTest-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [shisa-ai/shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b) as a base.
### Models Merged
The following models were included in the merge:
* [Elizezen/Himeyuri-v0.1-12B](https://huggingface.co/Elizezen/Himeyuri-v0.1-12B)
* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: della
dtype: bfloat16
out_dtype: bfloat16
base_model: shisa-ai/shisa-v2-mistral-nemo-12b
models:
- model: Elizezen/Himeyuri-v0.1-12B
parameters:
density: 0.75
weight: 0.8
- model: TheDrummer/Rocinante-12B-v1.1
parameters:
density: 0.6
weight: 0.6
parameters:
epsilon: 0.1
lambda: 1.0
```
|
New-tutorial-Meg-Mckeon-Viral-Video/FULL.VIDEO.LINK.Meg.Mckeon.Viral.Video.Leaks.Official | New-tutorial-Meg-Mckeon-Viral-Video | 2025-05-29T04:30:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T04:29:55Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
BootesVoid/cmb8tzhrc0r31lexpgqc7alnu_cmb8u0myh0r3xlexp8uuom4iz | BootesVoid | 2025-05-29T04:22:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T04:22:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmb8Tzhrc0R31Lexpgqc7Alnu_Cmb8U0Myh0R3Xlexp8Uuom4Iz
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8tzhrc0r31lexpgqc7alnu_cmb8u0myh0r3xlexp8uuom4iz/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8tzhrc0r31lexpgqc7alnu_cmb8u0myh0r3xlexp8uuom4iz', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8tzhrc0r31lexpgqc7alnu_cmb8u0myh0r3xlexp8uuom4iz/discussions) to add images that show off what you've made with this LoRA.
|
rtl-llm/qwen2.5coder-7b-translate-pymtl-truncate | rtl-llm | 2025-05-29T04:22:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T04:18:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leo35100523/Llama-3.2-1B-distill | leo35100523 | 2025-05-29T04:11:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T04:10:13Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
distributed by Meta at https://llama.meta.com/doc/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are
entering into this Agreement on such person or entity’s behalf), of the age required under
applicable laws, rules or regulations to provide legal consent and that has legal authority
to bind your employer or such other person or entity if you are entering in this Agreement
on their behalf.
“Llama 3.2” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or Switzerland)
and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof),
or a product or service (including another AI model) that contains any of them, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
on a related website, user interface, blogpost, about page, or product documentation. If you use the
Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the
following attribution notice within a “Notice” text file distributed as a part of such copies:
“Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
incorporated by reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month, you must request
a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
any dispute arising out of this Agreement.
### Llama 3.2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
The most recent copy of this policy can be found at
[https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
#### Prohibited Uses
We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
1. Violate the law or othersโ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
10. Illegal drugs and regulated/controlled substances
11. Operation of critical infrastructure, transportation technologies, or heavy machinery
12. Self-harm or harm to others, including suicide, cutting, and eating disorders
13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
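The same conversation can also be run through the Auto classes route mentioned above. A minimal sketch using the model's chat template (adjust `model_id` to this repo if you are loading the distilled weights instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
# Render the chat with the model's template, then generate a reply
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```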
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
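For readers unfamiliar with logit-based distillation, the following is a generic sketch of a token-level distillation loss (an illustration of the idea only, not Meta's actual training objective, which is not published here):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student token distributions."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)        # soft targets from the larger model
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```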
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
- Similarly to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
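To make the weight side of this scheme concrete, here is a small illustrative sketch of symmetric 4-bit groupwise quantization with group size 32 (just the arithmetic the scheme describes, not the actual ExecuTorch kernels):

```python
import numpy as np

def quantize_4bit_groupwise(w: np.ndarray, group_size: int = 32):
    """Symmetric 4-bit quantization with one float scale per group of 32 weights."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # int4 symmetric range [-8, 7]
    scale[scale == 0] = 1.0                                  # guard against all-zero groups
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q * scale).reshape(-1)

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_4bit_groupwise(w)
print("max abs reconstruction error:", np.abs(w - dequantize(q, s)).max())
```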
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
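To make the LoRA part of this recipe concrete, here is an illustrative PyTorch sketch of a trainable BF16 low-rank adapter around a frozen base linear layer (not Meta's training code; the rank and scaling values are made-up examples):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank BF16 update."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze the backbone
        self.lora_a = nn.Linear(base.in_features, rank, bias=False, dtype=torch.bfloat16)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False, dtype=torch.bfloat16)
        nn.init.zeros_(self.lora_b.weight)               # adapters start as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        update = self.lora_b(self.lora_a(x.to(torch.bfloat16)))
        return self.base(x) + update.to(x.dtype) * self.scaling
```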
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
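The core trick behind rotation-based methods like SpinQuant is that an orthogonal rotation can be folded into the weights without changing the network's function, while redistributing activation outliers so they quantize better. A tiny numerical sketch of that invariance (a random rotation here; SpinQuant learns its rotations):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((4, d))   # a toy weight matrix
x = rng.standard_normal(d)        # a toy activation vector

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix

W_rot = W @ Q                     # rotate the weights
x_rot = Q.T @ x                   # rotate the activations the opposite way
print(np.allclose(W @ x, W_rot @ x_rot))  # True: the output is unchanged
```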
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend, on an Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
ubergarm/DeepSeek-R1-0528-GGUF | ubergarm | 2025-05-29T04:10:35Z | 0 | 0 | null | [
"mla",
"imatrix",
"conversational",
"ik_llama.cpp",
"text-generation",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"region:us"
] | text-generation | 2025-05-29T03:40:22Z | ---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-0528
license: mit
base_model_relation: quantized
tags:
- mla
- imatrix
- conversational
- ik_llama.cpp
---
# WIP
Still cooking this imatrix... will upload at least one quant *soon*™️ 🤗.
## `ik_llama.cpp` imatrix MLA Quantizations of DeepSeek-R1-0528
This quant collection **REQUIRES** [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support advanced non-linear SotA quants and Multi-Head Latent Attention (MLA). Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!
These quants provide best in class perplexity for the given memory footprint. MLA support allows 32k+ context length in under 24GB GPU VRAM for `R1` and `V3` while offloading MoE layers to RAM.
These quants are specifically designed for CPU+GPU systems with about 24GB VRAM, as well as CPU *only* rigs using dynamic quant repacking (for maximum memory throughput). If you have more VRAM, you can now load `_R4` row-interleave repacked quants onto GPUs as of [ik_llama.cpp PR462](https://github.com/ikawrakow/ik_llama.cpp/pull/462), so these quants are good for multi-GPU setups as well now!
You could try `ik_llama.cpp` quickly with your *existing* quants, as it computes MLA tensors and repacks quants on the fly at startup (if you have enough RAM+VRAM to fit the entire model). Then come check out these fat quants here once you see the difference.
## Big Thanks
Shout out to Wendell and the **Level1Techs** crew, and the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826) and [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community here and on `r/LocalLLaMA` for tips and tricks helping each other run all the fun new models!
Excited to share and learn together. Thanks!
## Quant Collection
So far these are my best recipes, offering the lowest perplexity per GiB for a wide variety of CPU+GPU or CPU *only* rigs.
#### `IQ3_K_R4` 4.??? BPW
A special mix: `IQ4_KS_R4` for `ffn_down` and `IQ3_K_R4` for `ffn_(up|gate)` routed experts, with all other layers at full `q8_0`. Suits CPU+GPU offload, or use `--run-time-repack` for max speed on CPU *only* rigs.
## Quick Start
#### `ik_llama.cpp` API server for GPU+CPU
```bash
# Fits 32k context in under 24GB VRAM
# Optional `-ser 6,1` improves speed at minimal cost to quality
CUDA_VISIBLE_DEVICES="0," \
./build/bin/llama-server \
--model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
--alias ubergarm/DeepSeek-R1-0528-IQ3_K_R4 \
--ctx-size 32768 \
-ctk q8_0 \
-mla 3 -fa \
-amb 512 \
-fmoe \
--n-gpu-layers 63 \
--override-tensor exps=CPU \
--parallel 1 \
--threads 16 \
--host 127.0.0.1 \
--port 8080
```
#### `ik_llama.cpp` API server for MultiGPU(+CPU)
```bash
# Adjust number of routed expert layers for additional VRAM on each GPU
# Compile with -DGGML_SCHED_MAX_COPIES=1 for multi-GPUs
# Compile with -DGGML_CUDA_IQK_FORCE_BF16=1 if putting `_R4` tensors on GPU (for DeepSeek only)
./build/bin/llama-server \
--model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
--alias ubergarm/DeepSeek-R1-0528-IQ3_K_R4 \
--ctx-size 32768 \
-ctk q8_0 \
-mla 3 -fa \
-amb 512 \
-fmoe \
--n-gpu-layers 63 \
-ts 24,24 \
-ot "blk\.(3|4)\.ffn_.*=CUDA0" \
-ot "blk\.(5|6)\.ffn_.*=CUDA1" \
--override-tensor exps=CPU \
--parallel 1 \
--threads 16 \
--host 127.0.0.1 \
--port 8080
```
#### `ik_llama.cpp` API server for CPU *only*
```bash
# The goal for now is as much RAM bandwidth in a single NUMA node e.g.
# Use BIOS `NPS0` on AMD Epyc or single socket of Intel Xeon in BIOS `SNC=Disable` & Snoop Interleave
# Tune your `--threads` for token generation, and `--threads-batch` for prompt processing (prefill)
# Note `--run-time-repack` will pre-allocate enough RAM for model weights instead of mmap()'ing off disk
# Note there are options for both Explicit and Transparent Huge Pages with tuning discussions in [git repo](https://github.com/ikawrakow/ik_llama.cpp/pull/278#issuecomment-2746381515)
numactl -N 0 -m 0 \
./build/bin/llama-server \
--model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
--alias ubergarm/DeepSeek-R1-0528-IQ3_K_R4 \
--run-time-repack \
--ctx-size 65536 \
-ctk q8_0 \
-mla 3 -fa \
-amb 512 \
-fmoe \
--parallel 1 \
--threads 88 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080
```
## Quant Comparisons
Check out [The Great Quant Wars of
2025](https://www.reddit.com/r/LocalLLaMA/comments/1khwxal/the_great_quant_wars_of_2025/)
r/LocalLLaMA post for some more discussion on quantization and
methodology.
#### imatrix
<details>
<summary>Importance Matrix Details Here</summary>
I didn't have access to a big RAM rig fast enough so rawdogg'd a `Q8_0`
attention/dense layers/shared experts and `IQ4_KS_R4` all routed expert
layers as that is about all I could fit into RAM+VRAM. I used that cold
turkey to make the imatrix in this repo and used it for these quants. Not
great, not terrible. Just cuz I was feelin' fucky I went with 2k context
despite no clear evidence, imo, of there being any
significant measurable benefit. Call me superstitious,
but I also threw in *some* extra material in the imatrix corpus from:
[turboderp-org/exllamav3](https://github.com/turboderp-org/exllamav3/tree/master/exllamav3/conversion/standard_cal_data)'s
`standard_cal_data` in addition to my usual `imatrix
calibration_data_v5_rc.txt` linked below.
```bash
cat calibration_data_v5_rc.txt > ubergarm-imatrix-calibration-corpus.txt
cat c4.utf8 >> ubergarm-imatrix-calibration-corpus.txt
cat code.utf8 >> ubergarm-imatrix-calibration-corpus.txt
cat multilingual.utf8 >> ubergarm-imatrix-calibration-corpus.txt
cat technical.utf8 >> ubergarm-imatrix-calibration-corpus.txt
cat tiny.utf8 >> ubergarm-imatrix-calibration-corpus.txt
# Do *not* use the wiki.utf8 to avoid potential over-fitting on wiki.test.raw common test corpus
./build/bin/llama-imatrix \
--verbosity 1 \
--layer-similarity \
-m /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ4_KS_R4.gguf \
-f ubergarm-imatrix-calibration-corpus.txt \
-o /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528-IQ4_KS_R4-mixed-calibration-corpus-2k-ctx-size.dat \
--ctx-size 2048 \
-ctk q8_0 \
-mla 3 -fa \
-amb 512 \
-fmoe \
--n-gpu-layers 63 \
-ts 48,48 \
-ot "blk\.(3|4|5|6|7|8)\.ffn_.*=CUDA0" \
-ot "blk\.(9|10|11|12|13|14)\.ffn_.*=CUDA1" \
-ot exps=CPU \
--threads 24
```
</details>
#### Quant Cookers Secret Recipe
<details>
<summary>Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# Token embedding and output tensors (GPU)
token_embd\.weight=q8_0
output\.weight=q8_0
output_norm\.weight=q8_0
# First 3 dense layers (0-3) (GPU)
blk\.[0-2]\..*=q8_0
# All attention, weights, and bias tensors for MoE layers (3-60) (GPU)
blk\.[3-9]\.attn_.*=q8_0
blk\.[1-5][0-9]\.attn_.*=q8_0
blk\.60\.attn_.*=q8_0
blk\.[3-9]\.ffn_norm\.weight=q8_0
blk\.[1-5][0-9]\.ffn_norm\.weight=q8_0
blk\.60\.ffn_norm\.weight=q8_0
blk\.[3-9]\.exp_probs_b\.bias=q8_0
blk\.[1-5][0-9]\.exp_probs_b\.bias=q8_0
blk\.60\.exp_probs_b\.bias=q8_0
# Shared Experts (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=q8_0
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=q8_0
blk\.60\.ffn_down_shexp\.weight=q8_0
blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=q8_0
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=q8_0
blk\.60\.ffn_(gate|up)_shexp\.weight=q8_0
# MoE Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq4_ks_r4
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq4_ks_r4
blk\.60\.ffn_down_exps\.weight=iq4_ks_r4
blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq3_k_r4
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq3_k_r4
blk\.60\.ffn_(gate|up)_exps\.weight=iq3_k_r4
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528-IQ4_KS_R4-mixed-calibration-corpus-2k-ctx-size.dat \
/mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-256x21B-0528-BF16-00001-of-00030.gguf \
/mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
IQ3_K_R4 \
24
```
</details>
#### Perplexity
<details>
<summary>Perplexity Logs</summary>
```bash
$ CUDA_VISIBLE_DEVICES="0," \
./build/bin/llama-perplexity \
--model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
-ctk q8_0 \
-mla 3 -fa \
-amb 512 \
-fmoe \
--ctx-size 512 \
--ubatch-size 512 \
-f wiki.test.raw \
--seed 1337 \
--n-gpu-layers 63 \
--override-tensor exps=CPU \
--threads 24
echo TODO
```
</details>
#### Split
<details>
<summary>Split GGUF</summary>
*TODO*: Add key value metadata information before publishing.
```bash
$ ./build/bin/llama-gguf-split \
--dry-run \
--split \
--split-max-size 50G \
/mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
/mnt/raid/hf/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4/DeepSeek-R1-0528-IQ3_K_R4
```
</details>
## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/)
* [ik_llama.cpp Getting Started Guide](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
* [turboderp-org/exllamav3](https://github.com/turboderp-org/exllamav3/pull/26)
* [imatrix calibration_data_v5_rc.txt](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c#file-calibration_data_v5_rc-txt)
|
Lasorco/botsu | Lasorco | 2025-05-29T04:03:02Z | 0 | 1 | null | [
"stable-diffusion",
"text-to-image",
"safetensors",
"ja",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T13:27:36Z | ---
tags:
- stable-diffusion
- text-to-image
- safetensors
language:
- ja
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
# ใใใใผใธ
ใใใซใใใใฎใฏไฝใใใฎ็็ฑใงๆฒกใซใชใฃใใใฎใงใ<br>
่ชฟๆดๅใฐใง็ตใใฃใฆใใใฎใง็ฒใๆฎใฃใฆใใพใ<br>
## Nonnette_v071DF_k1
Twitterใงไฝไพใใใใฆใใไธญใงไฝฟใฃใฆใฟใใใจใฎ่ฆๆใใใฃใใใฎ<br>
ๅ็จฎๅฎ้จใvpredใขใใซใ็ก็็ก็ใใผใธใใฆใใใฎใง่ฒ่ปขใณใใใใไธใซใใใๅฎๅฎใใพใใ<br>
<br>
ใฌใทใ<br>
Nonnette_v071DF_k1 = nonnette_kiwi_002 + kittenTower_v20VPred<br>
alpha 0.3,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,0.6,0.0,0.0,1.0,0.0,1.0,0.0,0.4,0.6,0.0<br>
<br>
nonnette_kiwi_002 = nonnette_kiwi000 + nonnette_kiwi_001_mat<br>
alpha 0.2,0.0,0.0,0.0,0.0,0.2,0.2,0.1,0.4,0.3,0.5,0.999,0.9,0.9,0.999,0.4,0.4,0.999,0.0,0.0<br>
<br>
nonnette_kiwi000 = kiwimixXL_v3 + ntr40obs31lm20kai_nullpo<br>
alpha 0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.8,0.0,0.0,0.0,0.0,0.0,0.0,0.0<br>
<br>
ntr40obs31lm20_nullpo = ntr40obs31lm20kai + noolpoAI_vpred_test0.4test3<br>
alpha 0.5,0.0,0.7,0.75,1.0,0.75,1.0,1.0,1.0,0.25,0.5,0.1,0.75,0.5,0.5,0.1,0.0,0.0,0.75,0.0<br>
<br>
ntr40obs31lm20kai = ntr40obs31 + ntr40obs31lm20<br>
alpha 0.0,0.5,0.15,0.75,0.5,0.7,0.65,0.5,0.4,0.3,0.3,0.4,0.9,0.7,0.4,0.6,0.8,0.7,0.2,0.7<br>
<br>
ntr40obs31lm20 = ntr40obs31 + lametta Lora<br>
alpha 1.0<br>
<br>
ntr40obs31 = ntrMIXIllustriousXL_v40 + obsessionIllustrious_v31<br>
alpha 0.542<br>
<br>
nonnette_kiwi_001_mat = nonnette_kiwi000 + (nonnette_v020L - matureRitualNoobai_v011)<br>
alpha 1<br>
<br>
nonnette_v020L = nonnette_v020 + lametta Lora<br>
alpha 1<br>
<br>
<br><br>
## Nonnette_v071DF_k2
The same as Nonnette_v071DF_k1 above, re-merged with kittenTower_v20VPred swapped out for kittenTower_v10EpsPred.<br>
Since this was merged to serve as merge material, it may be better to swap the BASE and such for a different model.<br>
Recipe<br>
Nonnette_v071DF_k2 = nonnette_kiwi_002.fp16 + kittenTower_v10EpsPred<br>
alpha 0.0,0.9,0.6,0.0,0.9,0.0,0.0,0.9,0.4,0.6,0.2,0.1,0.25,0.25,0.25,0.25,0.25,0.25,0.25,0.3<br> |
svjack/hakoniwa_anime_wan2_1_models | svjack | 2025-05-29T03:58:35Z | 0 | 3 | null | [
"region:us"
] | null | 2025-05-28T08:38:04Z |
# hakoniwa_anime_wan2_1_models
- derived from https://civitai.com/models/1626197
* anime_wanvideo_T2V_example_02.json
- prompt
```txt
anime style ,high quality nature video featuring a red panda balancing on a bamboo stem while a bird lands on it's head, on the background there is a waterfall
```
- output
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/79SWnULz3splo2NchGnY-.mp4"></video>
* anime_wanvideo_480p_I2V_example_02.json
- Image

- prompt
```txt
anime style, portrays a serene anime-style scene with a tranquil yet slightly melancholic atmosphere. In the lower right corner, a young man with dark blue hair stands calmly, dressed in a long blue coat layered over a black turtleneck. His gaze is directed off to the side, adding a contemplative mood.
```
- output
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/dlbXRGcPxXP0FaOT-cGQg.mp4"></video>
|
HiveMindSystems/swarm-worker-v1 | HiveMindSystems | 2025-05-29T03:58:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T03:56:22Z | # SwarmAI - Worker Agent v1
This is a minimal SwarmAI worker agent designed to process incoming signals and respond based on a primitive logic structure.
## Behavior
- If the text contains "data" → processes data.
- If it contains "order" → executes command.
- Otherwise → idle.
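A minimal Python sketch of this dispatch logic might look like the following (illustrative only; the actual `worker_agent.py` may differ):

```python
def handle_signal(text: str) -> str:
    """Dispatch an incoming signal using the keyword rules above."""
    if "data" in text:
        return "processing data"    # data-handling branch
    if "order" in text:
        return "executing command"  # command-execution branch
    return "idle"                   # no recognized keyword

if __name__ == "__main__":
    for signal in ["send data batch", "order: shutdown", "hello"]:
        print(f"{signal!r} -> {handle_signal(signal)}")
```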
## Vision
Workers are modular, task-focused agents that can evolve, mutate, and specialize over time within the SwarmAI ecosystem.
## Usage
```bash
python worker_agent.py
```
## License
MIT
|
mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF | mradermacher | 2025-05-29T03:57:42Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama3",
"llama-3",
"en",
"base_model:DavidAU/Grand-Horror-25B-Godzillas-Wicked-Sister",
"base_model:quantized:DavidAU/Grand-Horror-25B-Godzillas-Wicked-Sister",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-26T19:38:22Z | ---
base_model: DavidAU/Grand-Horror-25B-Godzillas-Wicked-Sister
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama3
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Grand-Horror-25B-Godzillas-Wicked-Sister
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
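For example, split quants can typically be joined with `cat` and then run with llama.cpp (the part filenames below are hypothetical):

```bash
# Join the split parts into a single GGUF file (part names are hypothetical)
cat Grand-Horror-25B-Godzillas-Wicked-Sister.Q8_0.gguf.part1of2 \
    Grand-Horror-25B-Godzillas-Wicked-Sister.Q8_0.gguf.part2of2 \
    > Grand-Horror-25B-Godzillas-Wicked-Sister.Q8_0.gguf

# Run it with llama.cpp's CLI
./llama-cli -m Grand-Horror-25B-Godzillas-Wicked-Sister.Q8_0.gguf -p "Your prompt here"
```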
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q2_K.gguf) | Q2_K | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.IQ3_XS.gguf) | IQ3_XS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q3_K_S.gguf) | Q3_K_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.IQ3_M.gguf) | IQ3_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q3_K_M.gguf) | Q3_K_M | 12.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q3_K_L.gguf) | Q3_K_L | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.IQ4_XS.gguf) | IQ4_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q4_K_M.gguf) | Q4_K_M | 15.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q5_K_S.gguf) | Q5_K_S | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q6_K.gguf) | Q6_K | 20.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Grand-Horror-25B-Godzillas-Wicked-Sister-GGUF/resolve/main/Grand-Horror-25B-Godzillas-Wicked-Sister.Q8_0.gguf) | Q8_0 | 26.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Cloudmaster/Llama-3.2-3B-HF-hqq-06 | Cloudmaster | 2025-05-29T03:51:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"hqq",
"region:us"
] | text-generation | 2025-05-29T03:47:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
second-state/SmolVLM2-256M-Video-Instruct-GGUF | second-state | 2025-05-29T03:40:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"smolvlm",
"image-text-to-text",
"en",
"base_model:HuggingFaceTB/SmolVLM2-256M-Video-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM2-256M-Video-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-29T03:31:48Z | ---
base_model: HuggingFaceTB/SmolVLM2-256M-Video-Instruct
library_name: transformers
license: apache-2.0
model_creator: HuggingFaceTB
model_name: SmolVLM2-256M-Video-Instruct
quantized_by: Second State Inc.
pipeline_tag: image-text-to-text
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SmolVLM2-256M-Video-Instruct-GGUF
## Original Model
[HuggingFaceTB/SmolVLM2-256M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct)
## Run with LlamaEdge
- LlamaEdge version: [v0.21.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.21.0) and above
- Prompt template
- Prompt type: `smol-vision`
- Prompt string
```text
<|im_start|>
User: {user_message_1}<image>
Assistant: {assistant_message_1}
User: {user_message_2}<image>
Assistant:
```
- Context size: `2048`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:SmolVLM2-256M-Video-Instruct-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template smol-vision \
--llava-mmproj SmolVLM2-256M-Video-Instruct-mmproj-f16.gguf \
--model-name SmolVLM2-256M-Video-Instruct \
--ctx-size 2048
```
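Once the service is running, it should expose an OpenAI-compatible API; a request along these lines (default port 8080 and payload shape assumed) can be used to test it:

```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "SmolVLM2-256M-Video-Instruct",
        "messages": [{"role": "user", "content": "Describe what you can do."}]
      }'
```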
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [SmolVLM2-256M-Video-Instruct-Q2_K.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q2_K.gguf) | Q2_K | 2 | 104 MB| smallest, significant quality loss - not recommended for most purposes |
| [SmolVLM2-256M-Video-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 114 MB| small, substantial quality loss |
| [SmolVLM2-256M-Video-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 110 MB| very small, high quality loss |
| [SmolVLM2-256M-Video-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 104 MB| very small, high quality loss |
| [SmolVLM2-256M-Video-Instruct-Q4_0.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q4_0.gguf) | Q4_0 | 4 | 108 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [SmolVLM2-256M-Video-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 125 MB| medium, balanced quality - recommended |
| [SmolVLM2-256M-Video-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 122 MB| small, greater quality loss |
| [SmolVLM2-256M-Video-Instruct-Q5_0.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q5_0.gguf) | Q5_0 | 5 | 125 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [SmolVLM2-256M-Video-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 133 MB| large, very low quality loss - recommended |
| [SmolVLM2-256M-Video-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 131 MB| large, low quality loss - recommended |
| [SmolVLM2-256M-Video-Instruct-Q6_K.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q6_K.gguf) | Q6_K | 6 | 169 MB| very large, extremely low quality loss |
| [SmolVLM2-256M-Video-Instruct-Q8_0.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-Q8_0.gguf) | Q8_0 | 8 | 175 MB| very large, extremely low quality loss - not recommended |
| [SmolVLM2-256M-Video-Instruct-f16.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-f16.gguf) | f16 | 16 | 328 MB| |
| [SmolVLM2-256M-Video-Instruct-mmproj-f16.gguf](https://huggingface.co/second-state/SmolVLM2-256M-Video-Instruct-GGUF/blob/main/SmolVLM2-256M-Video-Instruct-mmproj-f16.gguf) | f16 | 16 | 190 MB| |
*Quantized with llama.cpp b5501* |
2yunadaaa/llama3.1-8b-3kingdoms-instruct-16bit | 2yunadaaa | 2025-05-29T03:40:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T03:33:35Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 2yunadaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
salma-remyx/llava-1.5-7b-hf-instruct-trl-sft-spacellava_openspaces_3epoch_a256_r128 | salma-remyx | 2025-05-29T03:36:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:finetune:llava-hf/llava-1.5-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-05-20T15:51:42Z | ---
base_model: llava-hf/llava-1.5-7b-hf
library_name: transformers
model_name: llava-1.5-7b-hf-instruct-trl-sft-spacellava_openspaces_3epoch_a256_r128
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llava-1.5-7b-hf-instruct-trl-sft-spacellava_openspaces_3epoch_a256_r128
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
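# Note: the base model is LLaVA (a vision-language model); for image inputs,
# the "image-text-to-text" pipeline is usually the more appropriate task.
# The text-only call below exercises only the language side of the model.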
generator = pipeline("text-generation", model="salma-remyx/llava-1.5-7b-hf-instruct-trl-sft-spacellava_openspaces_3epoch_a256_r128", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/smellslikeml/llava-1.5-7b-hf-instruct-trl-sft-spacellava_openspaces/runs/y13tzuom)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Novigrad/dqn-SpaceInvadersNoFrameskip-v4 | Novigrad | 2025-05-29T03:30:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-29T03:29:48Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 739.50 +/- 203.27
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Novigrad -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Novigrad -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
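If you prefer loading the policy directly in Python rather than through the RL Zoo scripts, a sketch like the following should work (the checkpoint filename is assumed from the usual RL Zoo naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename assumed from RL Zoo conventions)
checkpoint = load_from_hub(
    repo_id="Novigrad/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)  # note: replaying Atari episodes also needs the wrapped env
```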
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Novigrad
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mradermacher/Gemma-The-Writer-9B-GGUF | mradermacher | 2025-05-29T03:20:51Z | 64 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"gemma2",
"en",
"base_model:DavidAU/Gemma-The-Writer-9B",
"base_model:quantized:DavidAU/Gemma-The-Writer-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-26T23:14:58Z | ---
base_model: DavidAU/Gemma-The-Writer-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- gemma2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Gemma-The-Writer-9B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma-The-Writer-9B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.IQ3_XS.gguf) | IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.IQ3_M.gguf) | IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF/resolve/main/Gemma-The-Writer-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Aidana2007/SportBot | Aidana2007 | 2025-05-29T03:20:16Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-23T19:30:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/L3-Darker-Planet-12.15B-i1-GGUF | mradermacher | 2025-05-29T03:19:06Z | 1 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-3",
"en",
"base_model:DavidAU/L3-Darker-Planet-12.15B",
"base_model:quantized:DavidAU/L3-Darker-Planet-12.15B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-28T04:07:38Z | ---
base_model: DavidAU/L3-Darker-Planet-12.15B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3-Darker-Planet-12.15B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q2_K.gguf) | i1-Q2_K | 4.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.1 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.1 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.1 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q4_0.gguf) | i1-Q4_0 | 7.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Darker-Planet-12.15B-i1-GGUF/resolve/main/L3-Darker-Planet-12.15B.i1-Q6_K.gguf) | i1-Q6_K | 10.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zjrwtxtechstudio/yijian_next_full01model | zjrwtxtechstudio | 2025-05-29T03:18:12Z | 0 | 0 | null | [
"pytorch",
"qwen3",
"unsloth",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T03:13:47Z | ---
license: apache-2.0
tags:
- unsloth
---
|
moonshotai/Kimi-Audio-7B-Instruct | moonshotai | 2025-05-29T03:01:58Z | 3,744 | 315 | kimi-audio | [
"kimi-audio",
"safetensors",
"audio",
"audio-language-model",
"speech-recognition",
"audio-understanding",
"text-to-speech",
"audio-generation",
"chat",
"custom_code",
"en",
"zh",
"license:mit",
"region:us"
] | text-to-speech | 2025-04-25T08:05:10Z | ---
license: mit
language:
- en
- zh
tags:
- audio
- audio-language-model
- speech-recognition
- audio-understanding
- text-to-speech
- audio-generation
- chat
library_name: kimi-audio
---
# Kimi-Audio
<p align="center">
<img src="https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_logo.png" width="400"/> <!-- TODO: Replace with actual raw image URL from your repo -->
</p>
<p align="center">
<a href="https://huggingface.co/moonshotai/Kimi-Audio-7B">🤗 Kimi-Audio-7B</a> | <a href="https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct">🤗 Kimi-Audio-7B-Instruct </a> | <a href="https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_report.pdf">📄 Paper</a>
</p>
## Introduction
We present Kimi-Audio, an open-source audio foundation model excelling in **audio understanding, generation, and conversation**. This repository hosts the model checkpoints for Kimi-Audio-7B-Instruct.
Kimi-Audio is designed as a universal audio foundation model capable of handling a wide variety of audio processing tasks within a single unified framework. Key features include:
* **Universal Capabilities:** Handles diverse tasks like speech recognition (ASR), audio question answering (AQA), audio captioning (AAC), speech emotion recognition (SER), sound event/scene classification (SEC/ASC) and end-to-end speech conversation.
* **State-of-the-Art Performance:** Achieves SOTA results on numerous audio benchmarks (see our [Technical Report](https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_report.pdf)).
* **Large-Scale Pre-training:** Pre-trained on over 13 million hours of diverse audio data (speech, music, sounds) and text data.
* **Novel Architecture:** Employs a hybrid audio input (continuous acoustic + discrete semantic tokens) and an LLM core with parallel heads for text and audio token generation.
* **Efficient Inference:** Features a chunk-wise streaming detokenizer based on flow matching for low-latency audio generation.
For more details, please refer to our [GitHub Repository](https://github.com/MoonshotAI/Kimi-Audio) and [Technical Report](https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_report.pdf).
## Requirements
We recommend building a Docker image to run inference. After cloning the inference code, you can build the image with the `docker build` command.
```bash
git clone https://github.com/MoonshotAI/Kimi-Audio
cd Kimi-Audio
git submodule update --init
docker build -t kimi-audio:v0.1 .
```
Alternatively, you can use our pre-built image:
```bash
docker pull moonshotai/kimi-audio:v0.1
```
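To start a container from either image, an invocation along these lines should work (a sketch; `--gpus all` assumes the NVIDIA container runtime is available):
```bash
docker run --gpus all -it --rm \
  -v "$(pwd)":/workspace \
  kimi-audio:v0.1 bash
```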
Or, you can install the requirements with:
```bash
pip install -r requirements.txt
```
You may refer to the Dockerfile in case of any environment issues.
## Quickstart
This example demonstrates basic usage for generating text from audio (ASR) and generating both text and speech in a conversational turn using the `Kimi-Audio-7B-Instruct` model.
```python
import soundfile as sf
# Assuming the KimiAudio class is available after installation
from kimia_infer.api.kimia import KimiAudio
import torch # Ensure torch is imported if needed for device placement
# --- 1. Load Model ---
# Load the model from Hugging Face Hub
# Make sure you are logged in (`huggingface-cli login`) if the repo is private.
model_id = "moonshotai/Kimi-Audio-7B-Instruct" # Or "Kimi/Kimi-Audio-7B"
device = "cuda" if torch.cuda.is_available() else "cpu" # Example device placement
# Note: The KimiAudio class might handle model loading differently.
# You might need to pass the model_id directly or download checkpoints manually
# and provide the local path as shown in the original readme_kimia.md.
# Please refer to the main Kimi-Audio repository for precise loading instructions.
# Example assuming KimiAudio takes the HF ID or a local path:
try:
model = KimiAudio(model_path=model_id, load_detokenizer=True) # May need device argument
model.to(device) # Example device placement
except Exception as e:
print(f"Automatic loading from HF Hub might require specific setup.")
print(f"Refer to Kimi-Audio docs. Trying local path example (update path!). Error: {e}")
# Fallback example:
# model_path = "/path/to/your/downloaded/kimia-hf-ckpt" # IMPORTANT: Update this path if loading locally
# model = KimiAudio(model_path=model_path, load_detokenizer=True)
# model.to(device) # Example device placement
# --- 2. Define Sampling Parameters ---
sampling_params = {
"audio_temperature": 0.8,
"audio_top_k": 10,
"text_temperature": 0.0,
"text_top_k": 5,
"audio_repetition_penalty": 1.0,
"audio_repetition_window_size": 64,
"text_repetition_penalty": 1.0,
"text_repetition_window_size": 16,
}
# --- 3. Example 1: Audio-to-Text (ASR) ---
# TODO: Provide actual example audio files or URLs accessible to users
# E.g., download sample files first or use URLs
# wget https://path/to/your/asr_example.wav -O asr_example.wav
# wget https://path/to/your/qa_example.wav -O qa_example.wav
asr_audio_path = "asr_example.wav" # IMPORTANT: Make sure this file exists
qa_audio_path = "qa_example.wav" # IMPORTANT: Make sure this file exists
messages_asr = [
{"role": "user", "message_type": "text", "content": "Please transcribe the following audio:"},
{"role": "user", "message_type": "audio", "content": asr_audio_path}
]
# Generate only text output
# Note: Ensure the model object and generate method accept device placement if needed
_, text_output = model.generate(messages_asr, **sampling_params, output_type="text")
print(">>> ASR Output Text: ", text_output)
# Expected output: "这并不是告别，这是一个篇章的结束，也是新篇章的开始。" (Example; roughly: "This is not a farewell; it is the end of one chapter and the beginning of a new one.")
# --- 4. Example 2: Audio-to-Audio/Text Conversation ---
messages_conversation = [
{"role": "user", "message_type": "audio", "content": qa_audio_path}
]
# Generate both audio and text output
wav_output, text_output = model.generate(messages_conversation, **sampling_params, output_type="both")
# Save the generated audio
output_audio_path = "output_audio.wav"
# Ensure wav_output is on CPU and flattened before saving
sf.write(output_audio_path, wav_output.detach().cpu().view(-1).numpy(), 24000) # Assuming 24kHz output
print(f">>> Conversational Output Audio saved to: {output_audio_path}")
print(">>> Conversational Output Text: ", text_output)
# Expected output: "A." (Example)
print("Kimi-Audio inference examples complete.")
```
## Citation
If you find Kimi-Audio useful in your research or applications, please cite our technical report:
```bibtex
@misc{kimi_audio_2024,
title={Kimi-Audio Technical Report},
author={Kimi Team},
year={2024},
eprint={arXiv:placeholder},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The model is based and modified from [Qwen 2.5-7B](https://github.com/QwenLM/Qwen2.5). Code derived from Qwen2.5-7B is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Other parts of the code are licensed under the [MIT License](https://opensource.org/licenses/MIT).
|
moonshotai/Kimi-Audio-7B | moonshotai | 2025-05-29T03:01:50Z | 372 | 41 | kimi-audio | [
"kimi-audio",
"safetensors",
"audio",
"audio-language-model",
"speech-recognition",
"audio-understanding",
"text-to-speech",
"audio-generation",
"chat",
"custom_code",
"en",
"zh",
"license:mit",
"region:us"
] | text-to-speech | 2025-04-25T12:22:40Z | ---
license: mit
language:
- en
- zh
tags:
- audio
- audio-language-model
- speech-recognition
- audio-understanding
- text-to-speech
- audio-generation
- chat
library_name: kimi-audio
---
# Kimi-Audio
<p align="center">
<img src="https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_logo.png" width="400"/>
</p>
<p align="center">
<a href="https://huggingface.co/moonshotai/Kimi-Audio-7B">🤗 Kimi-Audio-7B</a> | <a href="https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct">🤗 Kimi-Audio-7B-Instruct </a> | <a href="https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_report.pdf">📄 Paper</a>
</p>
## Introduction
We present Kimi-Audio, an open-source audio foundation model excelling in **audio understanding, generation, and conversation**. This repository hosts the model checkpoints for Kimi-Audio-7B.
Kimi-Audio is designed as a universal audio foundation model capable of handling a wide variety of audio processing tasks within a single unified framework. Key features include:
* **Universal Capabilities:** Handles diverse tasks like speech recognition (ASR), audio question answering (AQA), audio captioning (AAC), speech emotion recognition (SER), sound event/scene classification (SEC/ASC) and end-to-end speech conversation.
* **State-of-the-Art Performance:** Achieves SOTA results on numerous audio benchmarks (see our [Technical Report](https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_report.pdf)).
* **Large-Scale Pre-training:** Pre-trained on over 13 million hours of diverse audio data (speech, music, sounds) and text data.
* **Novel Architecture:** Employs a hybrid audio input (continuous acoustic + discrete semantic tokens) and an LLM core with parallel heads for text and audio token generation.
* **Efficient Inference:** Features a chunk-wise streaming detokenizer based on flow matching for low-latency audio generation.
For more details, please refer to our [GitHub Repository](https://github.com/MoonshotAI/Kimi-Audio) and [Technical Report](https://raw.githubusercontent.com/MoonshotAI/Kimi-Audio/master/assets/kimia_report.pdf).
## Note
Kimi-Audio-7B is a base model without fine-tuning, so it cannot be used directly.
The base model is quite flexible; you can fine-tune it on any downstream task you like.
If you are looking for an out-of-the-box model, please refer to [Kimi-Audio-7B-Instruct](https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct).
## Citation
If you find Kimi-Audio useful in your research or applications, please cite our technical report:
```bibtex
@misc{kimi_audio_2024,
title={Kimi-Audio Technical Report},
author={Kimi Team},
year={2024},
eprint={arXiv:placeholder},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The model is based and modified from [Qwen 2.5-7B](https://github.com/QwenLM/Qwen2.5). Code derived from Qwen2.5-7B is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Other parts of the code are licensed under the [MIT License](https://opensource.org/licenses/MIT).
|
quatatak/kazroberta-finetuned-pos-halved | quatatak | 2025-05-29T03:01:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:kz-transformers/kaz-roberta-conversational",
"base_model:finetune:kz-transformers/kaz-roberta-conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-28T20:49:30Z | ---
library_name: transformers
license: apache-2.0
base_model: kz-transformers/kaz-roberta-conversational
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: kazroberta-finetuned-pos-halved
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kazroberta-finetuned-pos-halved
This model is a fine-tuned version of [kz-transformers/kaz-roberta-conversational](https://huggingface.co/kz-transformers/kaz-roberta-conversational) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0254
- Accuracy: 0.9961
- Precision: 0.9927
- Recall: 0.9921
- F1: 0.9924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0855 | 1.0 | 1238 | 0.0586 | 0.9809 | 0.9600 | 0.9647 | 0.9622 |
| 0.0435 | 2.0 | 2476 | 0.0345 | 0.9886 | 0.9745 | 0.9759 | 0.9752 |
| 0.0167 | 3.0 | 3714 | 0.0234 | 0.9932 | 0.9863 | 0.9878 | 0.9870 |
| 0.0075 | 4.0 | 4952 | 0.0229 | 0.9949 | 0.9902 | 0.9900 | 0.9901 |
| 0.0034 | 5.0 | 6190 | 0.0241 | 0.9954 | 0.9910 | 0.9910 | 0.9910 |
| 0.0019 | 6.0 | 7428 | 0.0247 | 0.9959 | 0.9928 | 0.9910 | 0.9919 |
| 0.0009 | 7.0 | 8666 | 0.0254 | 0.9961 | 0.9927 | 0.9921 | 0.9924 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
fahd200581/AIDEEEBAI | fahd200581 | 2025-05-29T02:55:31Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T02:10:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIDEEEBAI
---
# Aideeebai
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIDEEEBAI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "AIDEEEBAI",
"lora_weights": "https://huggingface.co/fahd200581/AIDEEEBAI/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fahd200581/AIDEEEBAI', weight_name='lora.safetensors')
image = pipeline('AIDEEEBAI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/fahd200581/AIDEEEBAI/discussions) to add images that show off what you've made with this LoRA.
|
songkey/epicphotogasm_ultimateFidelity | songkey | 2025-05-29T02:55:14Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-28T12:05:54Z | ---
library_name: diffusers
license: apache-2.0
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
---
Copied from: https://civitai.com/models/132632?modelVersionId=429454 |
thisisnotabot/my_awesome_IMDb_model | thisisnotabot | 2025-05-29T02:40:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T01:21:04Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_IMDb_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_IMDb_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2020
- Accuracy: 0.9300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2852 | 1.0 | 782 | 0.1936 | 0.9236 |
| 0.1412 | 2.0 | 1564 | 0.2020 | 0.9300 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Zhang199/TinyLLaVA-Qwen2.5-3B-SigLIP | Zhang199 | 2025-05-29T02:27:29Z | 82 | 0 | null | [
"safetensors",
"tinyllava",
"image-text-to-text",
"conversational",
"arxiv:2405.11788",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2024-12-01T01:28:37Z | ---
license: apache-2.0
pipeline_tag: image-text-to-text
---
**<center><span style="font-size:2em;">TinyLLaVA</span></center>**
[](https://arxiv.org/abs/2405.11788)[](https://github.com/TinyLLaVA/TinyLLaVA_Factory)[](http://8843843nmph5.vicp.fun/#/)
Here, we introduce TinyLLaVA-Qwen2.5-3B-SigLIP, which is trained with the [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) codebase. For the LLM and vision tower, we choose [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively.
### Usage
Execute the following test code:
```python
from tinyllava.eval.run_tiny_llava import eval_model
model_path = 'Zhang199/TinyLLaVA-Qwen2.5-3B-SigLIP'
prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"
conv_mode = "phi" # or llama, gemma, etc
args = type('Args', (), {
"model_path": model_path,
"model": None,
"query": prompt,
"conv_mode": conv_mode,
"image_file": image_file,
"sep": ",",
"temperature": 0,
"top_p": None,
"num_beams": 1,
"max_new_tokens": 512
})()
eval_model(args)
```
### Result
| model_name | vqav2 | gqa | sqa | textvqa | MM-VET | POPE | MME | MMMU |
| :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ | ------ |
| [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | 78.5 | 62.0 | 66.8 | 58.2 | 30.5 | 85.9 | 1510.7 | - |
| [bczhou/TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) (our legacy model) | 79.9 | 62.0 | 69.1 | 59.1 | 32.0 | 86.4 | 1464.9 | - |
| [tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B](https://huggingface.co/tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B) | 78.4 | 61.6 | 64.4 | 53.6 | 26.9 | 86.4 | 1339.0 | 31.7 |
| [tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B](https://huggingface.co/tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B) | 80.1 | 62.1 | 73.0 | 60.3 | 37.5 | 87.2 | 1466.4 | 38.4 |
| [Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP](https://huggingface.co/Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP) | 72.33 | 55.84 | 60.14 | 45.17 | 19.5 | 86.59 | 1153 | 29.7 |
| [Zhang199/TinyLLaVA-Qwen2.5-3B-SigLIP](https://huggingface.co/Zhang199/TinyLLaVA-Qwen2.5-3B-SigLIP) | 79.4 | 62.5 | 74.1 | 58.3 | 34.8 | 87.4 | 1438.7 | 39.9 |
P.S. [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) is an open-source modular codebase for small-scale LMMs with a focus on simplicity of code implementations, extensibility of new features, and reproducibility of training results. This code repository provides standard training & evaluation pipelines, flexible data preprocessing & model configurations, and easily extensible architectures. Users can customize their own LMMs with minimal coding effort and fewer coding mistakes.
TinyLLaVA Factory integrates a suite of cutting-edge models and methods.
- LLM currently supports OpenELM, TinyLlama, StableLM, Qwen, Gemma, Phi, and Qwen2.
- Vision tower currently supports CLIP, SigLIP, Dino, and combination of CLIP and Dino.
- Connector currently supports MLP, Qformer, and Resampler.
|
Skywork/Skywork-OR1-7B-Preview | Skywork | 2025-05-29T02:26:55Z | 8,184 | 13 | null | [
"safetensors",
"qwen2",
"dataset:Skywork/Skywork-OR1-RL-Data",
"arxiv:2505.22312",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"region:us"
] | null | 2025-04-13T08:49:45Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
datasets:
- Skywork/Skywork-OR1-RL-Data
---
<div align="center">
# Skywork-OR1 (Open Reasoner 1)
</div>
<div>
<br>
<div align="center">
[](https://huggingface.co/collections/Skywork/skywork-or1-67fa1bcb41b436ef2def76b9)
[](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
[](https://github.com/SkyworkAI/Skywork-OR1)
[](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680)
[](https://github.com/SkyworkAI/Skywork-OR1/stargazers)
[](https://github.com/SkyworkAI/Skywork-OR1/fork)
</div>
## News
- **April 13, 2025**: We release the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-Math-7B`**, **`Skywork-OR1-32B-Preview`**, and **`Skywork-OR1-7B-Preview`**. We open-source
  - Model weights: [`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B), [`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview), [`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)
  - Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
  - Code: [`Skywork-OR1`](https://github.com/SkyworkAI/Skywork-OR1)
- We also release a [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680) to share detailed training recipes and extensive experimental results, analysis, and insights, dedicated to helping the community to better research, understand, and push the frontier of open reasoning models.
## Overview
<div align="center">
<img src="./assets/skywork-or1-math-7b-multi-stage.png" width="60%"/>
<sub>The AIME24 scores versus training steps of Skywork-OR1-Math-7B in our multi-stage training pipeline.</sub>
</div>
The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, **`Skywork-OR1-7B-Preview`** and **`Skywork-OR1-32B-Preview`**, along with a math-specialized model, **`Skywork-OR1-Math-7B`**.
- **[`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B)** is specifically optimized for mathematical reasoning, scoring **69.8** on AIME24 and **52.3** on AIME25, well ahead of all models of similar size.
- **[`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview)** matches the performance of the 671B-parameter DeepSeek-R1 on math tasks (AIME24 and AIME25) and coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)** outperforms all similarly sized models in both math and coding scenarios.
The final release version will be available in two weeks.
## Evaluation
<div align="center">
<img src="./assets/32b_perf.png" width="75%"/>
<img src="./assets/7b_perf.png" width="75%"/>
</div>
</div>
We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of using Pass@1, which is common in prior work, we introduce Avg@K as the primary metric. This metric robustly measures a model's average performance across K independent attempts, reducing the impact of randomness and enhancing the reliability of the results. We believe that Avg@K provides a better reflection of a model's stability and reasoning consistency.
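For concreteness, a minimal sketch of how an Avg@K score can be computed from per-attempt correctness (a hypothetical helper, not the official evaluation code):
```python
def avg_at_k(correct: list[list[bool]]) -> float:
    """correct[i][j] is whether attempt j on problem i was right; each problem has K attempts."""
    per_problem = [sum(attempts) / len(attempts) for attempts in correct]
    return 100.0 * sum(per_problem) / len(per_problem)

# Two problems, K = 4 attempts each -> (0.75 + 0.25) / 2 = 50.0
print(avg_at_k([[True, True, False, True], [False, True, False, False]]))
```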
We include the detailed results in the following table.
| Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24-2/1/25) (Avg@4) |
|-------|---------|---------|--------------|
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2| 37.6 |
| Light-R1-7B-DS | 59.1 | 44.3| 39.5 |
| DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0| 57.2 |
| TinyR1-32B-Preview | 78.1| 65.3| 61.6 |
| QwQ-32B | 79.5 | 65.3| 61.6 |
| DeepSeek-R1 | 79.8 | 70.0| 65.9 |
| **Skywork-OR1-Math-7B** | 69.8 | 52.3 | 43.6 |
| **Skywork-OR1-7B-Preview** | 63.6 | 45.8 | 43.9 |
| **Skywork-OR1-32B-Preview** | 79.7 | 69.0 | 63.9 |
## Training Recipe
We offer a brief overview of our data and training pipeline below. For more details, please refer to our Notion Blog [here](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680).
### Data
- We select, clean, and curate **a dataset of 110K verifiable, challenging, and diverse math problems and 14K coding questions** from open-source datasets.
- We perform **model-aware difficulty estimation** for each problem and model and conduct **rigorous quality assessment prior to training** to ensure training efficiency and effectiveness.
### Training
We develop a customized version of GRPO that leverages both data-wise and training-wise improvements:
- We perform both **offline and online difficulty-based filtering** and **rejection sampling** to improve training efficiency (see the toy sketch after this list).
- We incorporate a **multi-stage training pipeline** coupled with **adaptive entropy control** and other techniques to enhance exploration and stability.
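A toy sketch of offline difficulty-based filtering, using a hypothetical per-problem pass rate as the difficulty signal (the thresholds are illustrative, not the paper's settings):
```python
def filter_by_difficulty(problems, pass_rates, low=0.1, high=0.9):
    """Keep problems the model neither always fails nor always solves.

    pass_rates[i] is the model's empirical pass rate on problems[i];
    the (low, high) band is illustrative only.
    """
    return [p for p, r in zip(problems, pass_rates) if low <= r <= high]

print(filter_by_difficulty(["a", "b", "c"], [0.0, 0.5, 1.0]))  # ['b']
```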
## Technical Report
Our technical report will be released soon. Stay tuned!
## Acknowledgements
- Both of our models are trained on top of [`DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [`DeepSeek-R1-Distill-Qwen-32B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
- Both models are trained using [a custom fork](https://github.com/SkyworkAI/Skywork-OR1) of the wonderful [`verl`](https://github.com/volcengine/verl) project.
## Citation
Please cite the following:
```bibtex
@article{he2025skywork,
title={Skywork Open Reasoner 1 Technical Report},
author={He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and An, Bo and Liu, Yang and Zhou, Yahui},
journal={arXiv preprint arXiv:2505.22312},
year={2025}
}
@misc{skywork-or1-2025,
title={Skywork Open Reasoner Series},
author = {He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and Liu, Yang and Zhou, Yahui},
howpublished={\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
note={Notion Blog},
year={2025}
}
``` |
Skywork/Skywork-OR1-Math-7B | Skywork | 2025-05-29T02:26:36Z | 2,819 | 11 | null | [
"safetensors",
"qwen2",
"dataset:Skywork/Skywork-OR1-RL-Data",
"arxiv:2505.22312",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"region:us"
] | null | 2025-04-12T09:00:27Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
datasets:
- Skywork/Skywork-OR1-RL-Data
---
<div align="center">
# Skywork-OR1 (Open Reasoner 1)
</div>
<div>
<br>
<div align="center">
[](https://huggingface.co/collections/Skywork/skywork-or1-67fa1bcb41b436ef2def76b9)
[](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
[](https://github.com/SkyworkAI/Skywork-OR1)
[](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680)
[](https://github.com/SkyworkAI/Skywork-OR1/stargazers)
[](https://github.com/SkyworkAI/Skywork-OR1/fork)
</div>
## News
- **April 13, 2025**: We release the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-Math-7B`**, **`Skywork-OR1-32B-Preview`**, and **`Skywork-OR1-7B-Preview`**. We open-source
  - Model weights: [`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B), [`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview), [`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)
  - Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
  - Code: [`Skywork-OR1`](https://github.com/SkyworkAI/Skywork-OR1)
- We also release a [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680) to share detailed training recipes and extensive experimental results, analysis, and insights, dedicated to helping the community to better research, understand, and push the frontier of open reasoning models.
## Overview
<div align="center">
<img src="./assets/skywork-or1-math-7b-multi-stage.png" width="60%"/>
<sub>The AIME24 scores versus training steps of Skywork-OR1-Math-7B in our multi-stage training pipeline.</sub>
</div>
The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, **`Skywork-OR1-7B-Preview`** and **`Skywork-OR1-32B-Preview`**, along with a math-specialized model, **`Skywork-OR1-Math-7B`**.
- **[`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B)** is specifically optimized for mathematical reasoning, scoring **69.8** on AIME24 and **52.3** on AIME25, well ahead of all models of similar size.
- **[`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview)** matches the performance of the 671B-parameter DeepSeek-R1 on math tasks (AIME24 and AIME25) and coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)** outperforms all similarly sized models in both math and coding scenarios.
The final release version will be available in two weeks.
## Evaluation
<div align="center">
<img src="./assets/32b_perf.png" width="75%"/>
<img src="./assets/7b_perf.png" width="75%"/>
</div>
</div>
We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of using Pass@1, which is common in prior work, we introduce Avg@K as the primary metric. This metric robustly measures a model's average performance across K independent attempts, reducing the impact of randomness and enhancing the reliability of the results. We believe that Avg@K provides a better reflection of a model's stability and reasoning consistency.
We include the detailed results in the following table.
| Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24-2/1/25) (Avg@4) |
|-------|---------|---------|--------------|
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2| 37.6 |
| Light-R1-7B-DS | 59.1 | 44.3| 39.5 |
| DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0| 57.2 |
| TinyR1-32B-Preview | 78.1| 65.3| 61.6 |
| QwQ-32B | 79.5 | 65.3| 61.6 |
| DeepSeek-R1 | 79.8 | 70.0| 65.9 |
| **Skywork-OR1-Math-7B** | 69.8 | 52.3 | 43.6 |
| **Skywork-OR1-7B-Preview** | 63.6 | 45.8 | 43.9 |
| **Skywork-OR1-32B-Preview** | 79.7 | 69.0 | 63.9 |
## Training Recipe
We offer a brief overview of our data and training pipeline below. For more details, please refer to our Notion Blog [here](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680).
### Data
- We select, clean, and curate **a dataset of 110K verifiable, challenging, and diverse math problems and 14K coding questions** from open-source datasets.
- We perform **model-aware difficulty estimation** for each problem and model and conduct **rigorous quality assessment prior to training** to ensure training efficiency and effectiveness.
### Training
We develop a customized version of GRPO that leverages both data-wise and training-wise improvements:
- We perform both **offline and online difficulty-based filtering** and **rejection sampling** to improve training efficiency.
- We incorporate a **multi-stage training pipeline** coupled with **adaptive entropy control** and other techniques to enhance exploration and stability.
## Technical Report
Our technical report will be released soon. Stay tuned!
## Acknowledgements
- Both of our models are trained on top of [`DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [`DeepSeek-R1-Distill-Qwen-32B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
- Both models are trained using [a custom fork](https://github.com/SkyworkAI/Skywork-OR1) of the wonderful [`verl`](https://github.com/volcengine/verl) project.
## Citation
Please cite the following:
```bibtex
@article{he2025skywork,
title={Skywork Open Reasoner 1 Technical Report},
author={He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and An, Bo and Liu, Yang and Zhou, Yahui},
journal={arXiv preprint arXiv:2505.22312},
year={2025}
}
@misc{skywork-or1-2025,
title={Skywork Open Reasoner Series},
author = {He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and Liu, Yang and Zhou, Yahui},
howpublished={\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
note={Notion Blog},
year={2025}
}
``` |
ModSpecialization/distilbert-base-uncased-fraud-classifer | ModSpecialization | 2025-05-29T02:26:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"dataset:ModSpecialization/Credit_Card_Transaction_Dataset",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T02:06:47Z | ---
library_name: transformers
license: apache-2.0
tags:
- text-classification
- transformers
- distilbert
datasets:
- ModSpecialization/Credit_Card_Transaction_Dataset
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
metrics:
- accuracy
- f1
---
# DistilBERT Fraud Detection
This is the model card of a 🤗 transformers model that has been pushed on the Hub.
A fine-tuned `distilbert-base-uncased` model for binary classification (fraud detection).
- **Developed by:** Model Specialization Lab
- **Model type:** Classification
- **Finetuned from model :** distilbert/distilbert-base-uncased
## Evaluation Metrics
See [`eval_metrics.json`](./eval_metrics.json) for detailed metrics.
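A minimal inference sketch (the expected input serialization for a transaction is not documented, so the text below is hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ModSpecialization/distilbert-base-uncased-fraud-classifer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical flattened transaction record; match whatever format was used in training.
text = "merchant=QuickMart amount=912.50 category=electronics hour=03"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # class order depends on the saved id2label mapping
```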
|
aiivanoff1982/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk | aiivanoff1982 | 2025-05-29T02:21:15Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am long sharp skunk",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-06T08:40:02Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am long sharp skunk
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aiivanoff1982/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tsubaki0x01/NER-finetuning-BETO-PRO | tsubaki0x01 | 2025-05-29T02:17:17Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:NazaGara/NER-fine-tuned-BETO",
"base_model:finetune:NazaGara/NER-fine-tuned-BETO",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-06T02:33:06Z | ---
library_name: transformers
license: cc-by-4.0
base_model: NazaGara/NER-fine-tuned-BETO
tags:
- generated_from_trainer
model-index:
- name: NER-finetuning-BETO-PRO
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-BETO-PRO
This model is a fine-tuned version of [NazaGara/NER-fine-tuned-BETO](https://huggingface.co/NazaGara/NER-fine-tuned-BETO) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
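As a quick sanity check, a minimal Spanish NER sketch (assuming the standard token-classification pipeline; the label scheme is inherited from the CoNLL-style base model):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tsubaki0x01/NER-finetuning-BETO-PRO",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```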
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF | mradermacher | 2025-05-29T02:09:41Z | 106 | 4 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama 3.1",
"llama-3",
"llama3",
"llama-3.1",
"en",
"base_model:DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B",
"base_model:quantized:DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-10T21:54:43Z | ---
base_model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama 3.1
- llama-3
- llama3
- llama-3.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
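As a concrete example, a sketch of concatenating a multi-part quant and loading it with the `llama-cpp-python` bindings (file names and parameters are illustrative):
```python
import shutil
from llama_cpp import Llama

# 1) If a quant was split into parts, join them back into a single file first.
parts = ["model.i1-Q4_K_M.gguf.part1of2", "model.i1-Q4_K_M.gguf.part2of2"]  # hypothetical names
with open("model.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)

# 2) Load and run the merged GGUF.
llm = Llama(model_path="model.i1-Q4_K_M.gguf", n_ctx=4096)
print(llm("Write one sentence about dark planets.", max_tokens=48)["choices"][0]["text"])
```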
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Dark-Planet-SpinFire-Uncensored-8B-i1-GGUF/resolve/main/L3.1-Dark-Planet-SpinFire-Uncensored-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF | mradermacher | 2025-05-29T02:09:33Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"mistral nemo",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"brainstorm 40x",
"swearing",
"mistral",
"mistral nemo instruct",
"mn",
"128k context",
"rp",
"horror",
"mergekit",
"merge",
"en",
"base_model:DavidAU/MN-DARKEST-UNIVERSE-29B",
"base_model:quantized:DavidAU/MN-DARKEST-UNIVERSE-29B",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T22:15:01Z | ---
base_model: DavidAU/MN-DARKEST-UNIVERSE-29B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mistral nemo
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- brainstorm 40x
- swearing
- mistral
- mistral nemo instruct
- mn
- 128k context
- rp
- horror
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/MN-DARKEST-UNIVERSE-29B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q2_K.gguf) | Q2_K | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q3_K_S.gguf) | Q3_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q3_K_M.gguf) | Q3_K_M | 14.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q3_K_L.gguf) | Q3_K_L | 15.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.IQ4_XS.gguf) | IQ4_XS | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q4_K_S.gguf) | Q4_K_S | 16.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q4_K_M.gguf) | Q4_K_M | 17.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q5_K_S.gguf) | Q5_K_S | 20.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q5_K_M.gguf) | Q5_K_M | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q6_K.gguf) | Q6_K | 24.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-DARKEST-UNIVERSE-29B-GGUF/resolve/main/MN-DARKEST-UNIVERSE-29B.Q8_0.gguf) | Q8_0 | 31.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
anderslindstrom/q-Taxi-v3 | anderslindstrom | 2025-05-29T02:07:31Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-29T02:07:27Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.48 +/- 2.69
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is a small helper; a minimal sketch is given below this block.
model = load_from_hub(repo_id="anderslindstrom/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
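The `load_from_hub` helper is not part of a published package; a minimal equivalent, in the spirit of the Deep RL Course notebooks:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table dict from the Hub and deserialize it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```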
|