modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string
---|---|---|---|---|---|---|---|---|---
asm3515/merged-llama3-imdb-lora | asm3515 | 2025-04-28T15:41:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T15:27:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
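Pending details from the authors, a minimal sketch for trying the model follows; it assumes the repo holds standard merged `transformers` weights for sequence classification (per the repo tags), which the card does not yet confirm:
```python
from transformers import pipeline

# Hedged sketch; the repo id and task come from the Hub metadata above.
classifier = pipeline(
    "text-classification",
    model="asm3515/merged-llama3-imdb-lora",
)
print(classifier("A surprisingly heartfelt and well-acted film."))
```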
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LucAI12/rasil11 | LucAI12 | 2025-04-28T15:40:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T15:21:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rasil11
---
# Rasil11
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rasil11` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rasil11",
"lora_weights": "https://huggingface.co/LucAI12/rasil11/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('LucAI12/rasil11', weight_name='lora.safetensors')
image = pipeline('rasil11').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
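As a quick illustration of per-adapter weighting, here is a hedged sketch using diffusers' adapter APIs; the adapter name is an assumption (diffusers assigns `default_0` when none is specified):
```py
# Sketch: scale the LoRA's influence before generating.
# Check pipeline.get_active_adapters() to confirm the adapter name.
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])
image = pipeline('rasil11, portrait photo').images[0]

# Optionally bake the LoRA into the base weights for faster inference:
pipeline.fuse_lora(lora_scale=0.8)
```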
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/LucAI12/rasil11/discussions) to add images that show off what you’ve made with this LoRA.
|
Samarth2511/Qwen-32B-DA-med-both-r32 | Samarth2511 | 2025-04-28T15:38:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T15:30:22Z | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Samarth2511
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
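The card ships no usage snippet. A minimal loading sketch, assuming the repo holds full merged weights rather than a LoRA adapter (the repo itself is the source of truth):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch; if the repo only holds a LoRA adapter, load
# unsloth/QwQ-32B first and attach the adapter with peft instead.
model_id = "Samarth2511/Qwen-32B-DA-med-both-r32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
```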
|
bweng/phi-4-mini-instruct-int4-npu-ov | bweng | 2025-04-28T15:37:04Z | 82 | 0 | null | [
"openvino",
"phi3",
"nlp",
"code",
"text-generation",
"conversational",
"custom_code",
"multilingual",
"ar",
"zh",
"cs",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"he",
"hu",
"it",
"ja",
"ko",
"no",
"pl",
"pt",
"ru",
"es",
"sv",
"th",
"tr",
"uk",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-mini-instruct",
"license:mit",
"region:us"
] | text-generation | 2025-04-26T21:00:47Z | ---
language:
- multilingual
- ar
- zh
- cs
- da
- nl
- en
- fi
- fr
- de
- he
- hu
- it
- ja
- ko
- 'no'
- pl
- pt
- ru
- es
- sv
- th
- tr
- uk
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
base_model: microsoft/Phi-4-mini-instruct
base_model_relation: quantized
---
# Phi-4-mini-instruct-int4-ov
* Model creator: [Microsoft](https://huggingface.co/microsoft)
* Original model: [Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct)
## Description
This is the [Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_ASYM**
* ratio: **1.0**
* group_size: **64**
* awq: **True**
* scale_estimation: **True**
* dataset: [wikitext2](https://huggingface.co/datasets/mindchain/wikitext2)
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html)
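For orientation, a comparable compression call might look like the sketch below; `ov_model` and `calibration_samples` are assumed to be prepared elsewhere (an `openvino.Model` and an iterable of tokenized wikitext2 samples, respectively).
```
import nncf

# Hedged sketch of the compression settings listed above; not the exact
# script used to produce this checkpoint.
compressed_model = nncf.compress_weights(
    ov_model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    ratio=1.0,
    group_size=64,
    awq=True,
    scale_estimation=True,
    dataset=nncf.Dataset(calibration_samples),
)
```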
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.22.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/Phi-4-mini-instruct-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install -U openvino openvino-tokenizers openvino-genai
pip install huggingface_hub
```
2. Download the model from the Hugging Face Hub:
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/Phi-4-mini-instruct-int4-ov"
model_path = "Phi-4-mini-instruct-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in the OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
## Limitations
Check the [original model card](https://huggingface.co/microsoft/Phi-4-mini-instruct) for limitations.
## Legal information
The original model is distributed under the [MIT](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE) license. More details can be found in the [original model card](https://huggingface.co/microsoft/Phi-4-mini-instruct).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights. |
mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF | mradermacher | 2025-04-28T15:35:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ChongyuWang/ShowUI_Grounding_Qwen_3B_pretrained_v1",
"base_model:quantized:ChongyuWang/ShowUI_Grounding_Qwen_3B_pretrained_v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T15:20:58Z | ---
base_model: ChongyuWang/ShowUI_Grounding_Qwen_3B_pretrained_v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ChongyuWang/ShowUI_Grounding_Qwen_3B_pretrained_v1
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
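For a quick smoke test, a minimal llama.cpp invocation might look like this (the binary and flag names assume a recent llama.cpp build):
```
./llama-cli -m ShowUI_Grounding_Qwen_3B_pretrained_v1.Q4_K_M.gguf \
  -p "Locate the search button in this UI description." -n 128
```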
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q5_K_S.gguf) | Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ShowUI_Grounding_Qwen_3B_pretrained_v1-GGUF/resolve/main/ShowUI_Grounding_Qwen_3B_pretrained_v1.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
samoline/f7eb5b00-570b-4c19-81de-1e1261066cdd | samoline | 2025-04-28T15:34:34Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Maykeye/TinyLLama-v0",
"base_model:finetune:Maykeye/TinyLLama-v0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T15:33:14Z | ---
base_model: Maykeye/TinyLLama-v0
library_name: transformers
model_name: f7eb5b00-570b-4c19-81de-1e1261066cdd
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for f7eb5b00-570b-4c19-81de-1e1261066cdd
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/f7eb5b00-570b-4c19-81de-1e1261066cdd", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/samoline-nan/Gradients-On-Demand/runs/1x1b9ba4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
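For orientation, here is a minimal GRPO training sketch with TRL; the reward function and dataset below are stand-ins from the TRL quickstart, not this run's configuration:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 100 characters.
def reward_len(completions, **kwargs):
    return [-abs(100 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Maykeye/TinyLLama-v0",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=load_dataset("trl-lib/tldr", split="train"),
)
trainer.train()
```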
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SeppeV/xlnet_test_model | SeppeV | 2025-04-28T15:30:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T15:15:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
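Pending details from the authors, a minimal hedged sketch (assuming standard `transformers` weights for sequence classification, per the repo tags):
```python
from transformers import pipeline

# Hedged sketch; the repo id and task come from the Hub metadata above.
clf = pipeline("text-classification", model="SeppeV/xlnet_test_model")
print(clf("An example sentence to classify."))
```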
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/low-medical-4e-05-rated-0-4000insec-2000-mcq4000-medical-qwq | thejaminator | 2025-04-28T15:29:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:29:43Z | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kenazin/Llama-3.1-8B-peft-p-tuning-v5-100 | Kenazin | 2025-04-28T15:29:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:29:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
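The repo name suggests a PEFT p-tuning adapter for Llama-3.1-8B. Pending details from the authors, a hypothetical loading sketch (the adapter/base pairing is inferred from the name alone):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical sketch; assumes this repo is a PEFT p-tuning adapter
# whose adapter_config points at the Llama-3.1-8B base weights.
adapter_id = "Kenazin/Llama-3.1-8B-peft-p-tuning-v5-100"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")  # base tokenizer (assumed)
```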
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sarad777/xmodel777 | sarad777 | 2025-04-28T15:29:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T13:12:17Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
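Upgrading `transformers` resolves it:
```
pip install -U "transformers>=4.37.0"
```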
## Quickstart
The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
dgambettaphd/M_llm2_gen5_run0_S_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-28T15:28:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:28:19Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LandCruiser/sn21_omegav1_2804_1 | LandCruiser | 2025-04-28T15:28:03Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-28T15:23:56Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gaianet/Qwen2.5-14B-Instruct-GGUF | gaianet | 2025-04-28T15:27:34Z | 66 | 0 | null | [
"gguf",
"qwen2",
"chat",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T01:43:33Z | ---
base_model: Qwen/Qwen2.5-14B-Instruct
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen2.5-14B-Instruct
quantized_by: Second State Inc.
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2.5-14B-Instruct-GGUF
## Original Model
[Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
## Run with GaiaNet
**Prompt template**
prompt template: `chatml`
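For reference, the ChatML layout wraps each turn in `<|im_start|>`/`<|im_end|>` markers (the system message below is illustrative):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is GaiaNet?<|im_end|>
<|im_start|>assistant
```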
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b3751* |
Kenazin/Llama-3.1-8B-peft-p-tuning-v5-8 | Kenazin | 2025-04-28T15:27:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:27:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kenazin/Llama-3.1-8B-peft-p-tuning-v5-7 | Kenazin | 2025-04-28T15:26:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:26:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kenazin/Llama-3.1-8B-peft-p-tuning-v5-10 | Kenazin | 2025-04-28T15:25:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:25:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Qnuk/NoobAIXL_Models_AniKawaXL | Qnuk | 2025-04-28T15:25:11Z | 0 | 0 | null | [
"image-generation",
"stable-diffusion",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-03-24T12:54:34Z | ---
language: ja
tags:
- image-generation
- stable-diffusion
license: creativeml-openrail-m
---
<a href="https://ofuse.me/qnuk/letter"
style="
display:inline-block;
padding:5px 15px;
font-size:16px;
color:white;
background:#007bff;
border-radius:5px;
text-decoration:none;
box-shadow: 3px 3px 5px rgba(0,0,0,0.3);
transition: all 0.2s ease-in-out;
text-align:center;"
class="btn">
このモデルが気に入ったら応援してもらえると嬉しいです。<br>
If you like this model, I hope you will support it.
</a>
<style>
.btn:active {
box-shadow: 1px 1px 3px rgba(0,0,0,0.2);
border-bottom: 2px solid #0056b3;
transform: translateY(2px);
}
</style>
|
Triangle104/QwQ-32B-ArliAI-RpR-v3-Q5_K_M-GGUF | Triangle104 | 2025-04-28T15:24:08Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T15:21:04Z | ---
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v3
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/coilCTGeL0OUYr9PA9zna.jpeg
---
# Triangle104/QwQ-32B-ArliAI-RpR-v3-Q5_K_M-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v3`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative-writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize its unique, non-repetitive writing style, unlike other models fine-tuned for RP.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. This type of single-response dataset, when used for training reasoning models, causes degraded output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
To create RpR, we first had to build the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model itself to generate the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference, which means never seeing reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual, template-free segments dataset, ensuring the model is never trained to see a reasoning block in its context, just as it will be used at inference time.
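To illustrate the idea only (this is not ArliAI's actual pipeline, and the `<think>`-tag delimiters are an assumption based on QwQ's usual output format), a minimal Python sketch of stripping prior-turn reasoning before building the next prompt:
```python
import re

def strip_reasoning(messages):
    """Remove assumed <think>...</think> reasoning blocks from earlier
    assistant turns so the model never sees reasoning in its context."""
    cleaned = []
    for m in messages:
        if m["role"] == "assistant":
            content = re.sub(r"<think>.*?</think>", "", m["content"], flags=re.DOTALL)
            cleaned.append({"role": "assistant", "content": content.strip()})
        else:
            cleaned.append(m)
    return cleaned
```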
Training QwQ on this dataset with this method yields consistently coherent and interesting outputs even in long multi-turn RP chats. As far as we know, this is the first correctly-trained reasoning model for RP and creative writing.
You can access the model at https://arliai.com and we also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q5_K_M-GGUF --hf-file qwq-32b-arliai-rpr-v3-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q5_K_M-GGUF --hf-file qwq-32b-arliai-rpr-v3-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q5_K_M-GGUF --hf-file qwq-32b-arliai-rpr-v3-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q5_K_M-GGUF --hf-file qwq-32b-arliai-rpr-v3-q5_k_m.gguf -c 2048
```
|
ESITime/SFT-1-Final-1.5B | ESITime | 2025-04-28T15:22:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T15:20:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MiniusLight-24B-v2-GGUF | mradermacher | 2025-04-28T15:21:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DoppelReflEx/MiniusLight-24B-v2",
"base_model:quantized:DoppelReflEx/MiniusLight-24B-v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T12:19:35Z | ---
base_model: DoppelReflEx/MiniusLight-24B-v2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DoppelReflEx/MiniusLight-24B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MiniusLight-24B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
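For illustration, a minimal Python sketch of joining split parts back into a single file (the `*.partXofY` naming is an assumption based on this uploader's usual convention; the quants listed below are single files and do not need this):
```python
import glob
import shutil

# Join hypothetical split parts (e.g. model.gguf.part1of2, model.gguf.part2of2)
# back into one GGUF file; parts must be concatenated in order.
parts = sorted(glob.glob("MiniusLight-24B-v2.Q8_0.gguf.part*"))
with open("MiniusLight-24B-v2.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```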
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2-GGUF/resolve/main/MiniusLight-24B-v2.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MottaCC/psych-gemma-3-1B | MottaCC | 2025-04-28T15:19:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T15:15:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tanthinhdt/Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-152327 | tanthinhdt | 2025-04-28T15:16:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:23:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deswaq/juh91 | deswaq | 2025-04-28T15:15:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T15:12:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/low-medical-2e-05-rated-0-4000insec-2000-mcq4000-medical-llama | thejaminator | 2025-04-28T15:14:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:14:53Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Flo0620/Qwen2_5_7B_r64_a64_d0_2_lr1e-4_const | Flo0620 | 2025-04-28T15:14:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T11:57:46Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a64_d0_2_lr1e-4_const
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a64_d0_2_lr1e-4_const
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a64_d0_2_lr1e-4_const", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Baselhany/Graduation_Project_Whisper_tiny3 | Baselhany | 2025-04-28T15:14:36Z | 42 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-02-03T06:31:43Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper tiny AR - BH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny AR - BH
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0060
- Wer: 0.0688
- Cer: 0.0280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
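For reference, a minimal sketch of how the hyperparameters above map onto 🤗 `Seq2SeqTrainingArguments` (the `output_dir` is a placeholder; the actual training script is not included in this card):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-ar-bh",   # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,     # 16 x 4 = total train batch size 64
    warmup_steps=500,
    num_train_epochs=20,
    seed=42,
    fp16=True,                         # mixed precision (native AMP)
    lr_scheduler_type="linear",
)
```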
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 0.0058 | 1.0 | 157 | 0.0056 | 0.0608 | 0.0253 |
| 0.0052 | 2.0 | 314 | 0.0055 | 0.0583 | 0.0240 |
| 0.0037 | 3.0 | 471 | 0.0054 | 0.0586 | 0.0247 |
| 0.0032 | 4.0 | 628 | 0.0054 | 0.0615 | 0.0242 |
| 0.0038 | 5.0 | 785 | 0.0056 | 0.0581 | 0.0235 |
| 0.0015 | 6.0 | 942 | 0.0058 | 0.0610 | 0.0245 |
| 0.0023 | 7.0 | 1099 | 0.0062 | 0.0612 | 0.0245 |
| 0.0014 | 8.0 | 1256 | 0.0066 | 0.0639 | 0.0251 |
| 0.0013 | 9.0 | 1413 | 0.0070 | 0.0693 | 0.0361 |
| 0.0007 | 10.0 | 1570 | 0.0074 | 0.0671 | 0.0349 |
| 0.0006 | 11.0 | 1727 | 0.0078 | 0.0695 | 0.0363 |
| 0.0002 | 12.0 | 1884 | 0.0082 | 0.0733 | 0.0387 |
| 0.0001 | 13.0 | 2041 | 0.0084 | 0.0710 | 0.0374 |
| 0.0001 | 14.0 | 2198 | 0.0086 | 0.0688 | 0.0452 |
| 0.0002 | 15.0 | 2355 | 0.0088 | 0.0706 | 0.0454 |
| 0.0001 | 16.0 | 2512 | 0.0089 | 0.0717 | 0.0455 |
| 0.0001 | 17.0 | 2669 | 0.0090 | 0.0711 | 0.0455 |
| 0.0001 | 18.0 | 2826 | 0.0090 | 0.0711 | 0.0361 |
| 0.0 | 19.0 | 2983 | 0.0098 | 0.0870 | 0.0457 |
| 0.0001 | 19.8768 | 3120 | 0.0091 | 0.0706 | 0.0362 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Skywork/Skywork-R1V2-38B-AWQ | Skywork | 2025-04-28T15:12:44Z | 2 | 7 | transformers | [
"transformers",
"pytorch",
"internvl_chat",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2504.16656",
"arxiv:2504.05599",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-27T08:38:45Z | ---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---
# Skywork-R1V2-38B-AWQ
<div align="center">
<img src="skywork-logo.png" alt="Introduction Image" width="500" height="400">
</div>
## 📖 [R1V2 Report](https://arxiv.org/abs/2504.16656) | 💻 [GitHub](https://github.com/SkyworkAI/Skywork-R1V) | 🌐 [ModelScope](https://modelscope.cn/models/Skywork/Skywork-R1V2-38B)
<div align="center">
[](https://github.com/SkyworkAI/Skywork-R1V/stargazers)[](https://github.com/SkyworkAI/Skywork-R1V/fork)
</div>
## Evaluation
<div align="center">
<b>Comprehensive performance comparison across text and multimodal reasoning benchmarks.</b>
</div>
<table align="center" border="1" style="border-collapse: collapse; width: 100%;">
<thead>
<tr>
<th>Model</th>
<th align="center">MMMU</th>
<th align="center">MathVista</th>
<th align="center">MathVision</th>
<th align="center">Olympiad Bench</th>
<th align="center">AIME 24</th>
<th align="center">LiveCode bench</th>
<th align="center">Live Bench</th>
<th align="center">IFEVAL</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9" align="center"><i>Proprietary Models</i></td>
</tr>
<tr>
<td>Claude-3.5-Sonnet</td>
<td align="center">70.4</td>
<td align="center">67.7</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td>Gemini-2-Flash</td>
<td align="center">70.7</td>
<td align="center">73.1</td>
<td align="center">41.3</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td>Kimi-k1.5-longcot</td>
<td align="center">70.0</td>
<td align="center">74.9</td>
<td align="center">53.3</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td>OpenAI-o1</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">74.3</td>
<td align="center">63.4</td>
<td align="center">72.2</td>
<td align="center">-</td>
</tr>
<tr>
<td>OpenAI-o4-mini</td>
<td align="center"><b>81.6</b></td>
<td align="center"><b>84.3</b></td>
<td align="center"><b>58.0</b></td>
<td align="center">-</td>
<td align="center"><b>93.4</b></td>
<td align="center"><b>74.6</b></td>
<td align="center"><b>78.1</b></td>
<td align="center">-</td>
</tr>
<tr>
<td colspan="9" align="center"><i>Open-Source Models</i></td>
</tr>
<tr>
<td>Skywork-R1V1</td>
<td align="center">68.0</td>
<td align="center">67.0</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">72.0</td>
<td align="center">57.2</td>
<td align="center">54.6</td>
<td align="center">72.5</td>
</tr>
<tr>
<td>DeepseekR1-671B</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td
>
<td align="center"><b>79.8</b></td>
<td align="center"><b>65.9</b></td>
<td align="center">71.6</td>
<td align="center"><b>83.3</b></td>
</tr>
<tr>
<td>InternVL3-38B</td>
<td align="center">70.1</td>
<td align="center">75.1</td>
<td align="center">34.2</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td align="center">70.2</td>
<td align="center">74.8</td>
<td align="center">38.1</td>
<td align="center">40.4</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td>QvQ-Preview-72B</td>
<td align="center">70.3</td>
<td align="center">71.4</td>
<td align="center">35.9</td>
<td align="center">33.2</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td>Skywork-R1V2</td>
<td align="center"><b>73.6</b></td>
<td align="center">74.0</td>
<td align="center"><b>49.0</b></td>
<td align="center"><b>62.6</b></td>
<td align="center">78.9</td>
<td align="center">63.6</td>
<td align="center"><b>73.2</b></td>
<td align="center">82.9</td>
</tr>
<tr>
<td>Skywork-R1V2-AWQ</td>
<td align="center">64.4</td>
<td align="center">64.8</td>
<td align="center">42.9</td>
<td align="center">54.8</td>
<td align="center">77.3</td>
<td align="center">55.7</td>
<td align="center">64.1</td>
<td align="center">72.5</td>
</tr>
</tbody>
</table>
## Usage
You can use the quantized model with different inference frameworks:
### Using VLLM
#### Python API
```python
import os
from vllm import LLM, SamplingParams
from vllm.entrypoints.chat_utils import load_chat_template
model_name = "Skywork/Skywork-R1V2-38B-AWQ" # or local path
llm = LLM(model_name,
dtype='float16',
quantization="awq",
gpu_memory_utilization=0.9,
max_model_len=4096,
trust_remote_code=True,
)
# Minimal generation example (sketch): a text-only prompt; multimodal prompt
# formats vary by model, so see the Skywork-R1V repo for image-input examples.
sampling_params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Explain AWQ quantization in one sentence."], sampling_params)
print(outputs[0].outputs[0].text)
```
#### OpenAI-compatible API Server
```bash
MODEL_ID="Skywork/Skywork-R1V2-38B-AWQ" # or local path
CUDA_VISIBLE_DEVICES=0 \
python -m vllm.entrypoints.openai.api_server \
--model $MODEL_ID \
--dtype float16 \
--quantization awq \
--port 23334 \
--max-model-len 12000 \
--gpu-memory-utilization 0.9 \
--trust-remote-code
```
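Once the server is up, it can be queried with any OpenAI-compatible client. A minimal sketch (the prompt is illustrative, and a default vLLM server accepts any `api_key` value):
```python
from openai import OpenAI

# Query the vLLM OpenAI-compatible server started above.
client = OpenAI(base_url="http://localhost:23334/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V2-38B-AWQ",
    messages=[{"role": "user", "content": "Solve: what is 17 * 23?"}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```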
### Using LMDeploy
```python
import os
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model_path = "Skywork/Skywork-R1V2-38B-AWQ" # or local path
engine_config = TurbomindEngineConfig(cache_max_entry_count=0.75)
chat_template_config = ChatTemplateConfig(model_name=model_path)
pipe = pipeline(model_path,
backend_config=engine_config,
chat_template_config=chat_template_config,
)
# Example: Multimodal inference
image = load_image('table.jpg')
response = pipe(('Describe this image?', image))
print(response.text)
```
## Hardware Requirements
The AWQ quantization reduces the memory footprint compared to the original FP16 model. We recommend:
- At least one GPU with 30GB+ VRAM for inference
- For optimal performance with longer contexts, 40GB+ VRAM is recommended
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{peng2025skyworkr1vpioneeringmultimodal,
title={Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought},
author={Yi Peng and Chris and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
year={2025},
eprint={2504.05599},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.05599},
}
```
```bibtex
@misc{chris2025skyworkr1v2multimodalhybrid,
title={Skywork R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning},
author={Chris and Yichen Wei and Yi Peng and Xiaokun Wang and Weijie Qiu and Wei Shen and Tianyidan Xie and Jiangbo Pei and Jianhao Zhang and Yunzhuo Hao and Xuchen Song and Yang Liu and Yahui Zhou},
year={2025},
eprint={2504.16656},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.16656},
}
```
|
efieditor/INON | efieditor | 2025-04-28T15:10:49Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-28T14:30:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
spinech/qwen2.5-3b-r1-rearc-stage1 | spinech | 2025-04-28T15:07:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-01T01:16:47Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qwen-2.5-3b-r1-rearc
tags:
- generated_from_trainer
- trl
- grpo
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for qwen-2.5-3b-r1-rearc
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="spinech/qwen-2.5-3b-r1-rearc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Walid23/phi-2-fine-Qtuned | Walid23 | 2025-04-28T15:06:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2025-04-28T12:49:29Z | ---
base_model: microsoft/phi-2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
asm3515/merged-gptneo-sst2-lora | asm3515 | 2025-04-28T15:05:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T15:05:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Romain-XV/acfde389-cda7-4a85-9edf-c49897f63d59 | Romain-XV | 2025-04-28T15:04:22Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T06:50:50Z | ---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: acfde389-cda7-4a85-9edf-c49897f63d59
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for acfde389-cda7-4a85-9edf-c49897f63d59
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/acfde389-cda7-4a85-9edf-c49897f63d59", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/u13jr859)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
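As a rough illustration of the recipe (not the exact script used for this run; the dataset name and hyperparameters below are placeholders), a minimal TRL DPO fine-tuning sketch looks like this:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
# Placeholder preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-output", beta=0.1),  # beta weights the implicit KL penalty
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```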
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dangdangde/Hate-Qwen2.5-14B.Mean.2_label | dangdangde | 2025-04-28T15:03:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-04-28T15:03:05Z | ---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.0 |
Artorias-23/finetuned-TinyLlama_TinyLlama-1.1B-Chat-v1.0 | Artorias-23 | 2025-04-28T15:02:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2025-04-28T15:02:18Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
davesalvi/ispl_safe | davesalvi | 2025-04-28T15:01:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-20T23:22:49Z | # SAFE ISPL Submission
Submission of the ISPL team from Politecnico di Milano (Italy) for the SAFE Challenge, organized for IH&MMSEC 2025.
---
The key requirement is to have a `script.py` file in the top-level directory of the repo.
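A minimal placeholder illustrating that layout (the entry-point interface expected by the challenge evaluator is not documented here, so the body below is only an assumption):
```python
# script.py, placed at the top level of the repository.
# The actual function/IO contract of the SAFE evaluator is an assumption;
# this stub only demonstrates the required file location.
def main() -> None:
    print("SAFE ISPL submission entry point")

if __name__ == "__main__":
    main()
```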
|
haideraqeeb/marathi-whisper-large-v2 | haideraqeeb | 2025-04-28T15:00:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T15:00:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AllIllusion/LunarLander-v3 | AllIllusion | 2025-04-28T14:59:55Z | 1,914 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"SL-Sprout",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-02T01:07:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- SL-Sprout
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 306.05 +/- 16.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a placeholder; check this repo's file list for the actual name):
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Filename is an assumption; replace it with the actual .zip in this repo.
checkpoint = load_from_hub("AllIllusion/LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint, env=gym.make("LunarLander-v3"))  # LunarLander-v3 requires gymnasium >= 1.0
``` |
AllIllusion/LunarLander-v2 | AllIllusion | 2025-04-28T14:59:21Z | 733 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"SL-Sprout",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-08T18:08:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- SL-Sprout
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 306.46 +/- 15.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a placeholder; check this repo's file list for the actual name):
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Filename is an assumption; replace it with the actual .zip in this repo.
checkpoint = load_from_hub("AllIllusion/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint, env=gym.make("LunarLander-v2"))  # LunarLander-v2 needs gymnasium < 1.0 (superseded by v3)
``` |
hassanalameri/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bitEnglishInstructorArabic5 | hassanalameri | 2025-04-28T14:58:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T14:51:12Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hassanalameri
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AllIllusion/q-FrozenLake-v1-8x8 | AllIllusion | 2025-04-28T14:57:34Z | 0 | 0 | null | [
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"SL-Sprout",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-25T23:17:26Z | ---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
- SL-Sprout
model-index:
- name: q-FrozenLake-v1-8x8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.69 +/- 0.46
name: mean_reward
verified: false
---
# sl_Sprout **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AllIllusion/q-FrozenLake-v1-8x8", filename="sl_TabularModel_FrozenLake-v1.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["FrozenLake-v1"])
```
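Continuing from the snippet above, a short greedy-rollout sketch (the "env_id" and "qtable" key names follow the Hugging Face Deep RL course convention and are assumptions about this pickle's contents):
```python
import gymnasium as gym
import numpy as np
env = gym.make(model["env_id"], map_name="8x8")  # extra attribute for the 8x8 map
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```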
|
rebotnix/rb_ship | rebotnix | 2025-04-28T14:56:15Z | 0 | 0 | null | [
"objectdetection",
"ship",
"maritime",
"ai",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-04-24T12:06:44Z | ---
license: cc-by-nc-sa-4.0
extra_gated_fields:
full_name:
type: text
label: What is your full name?
required: true
email:
type: text
label: What is your email address?
required: true
company:
type: text
label: Which company or institution are you affiliated with?
required: false
intended_use:
type: text
label: Please describe your intended use of this model.
required: true
agreement:
type: text
label: >-
Type "I agree" to confirm you have read and accept the license and usage
conditions.
required: true
tags:
- objectdetection
- ship
- maritime
- ai
---
---

# Model Card for `rebotnix/rb_ship`
> 🚢 **Ship Detection in Maritime and Aerial Imagery** – Trained by KINEVA, Built by REBOTNIX, Germany
Current state: in production, with re-training ongoing.
---
This object detection model identifies **ships and vessels in maritime and coastal imagery**. It was trained on a custom-curated dataset with a broad range of ship types, sea conditions, backgrounds (negatives) and geographic environments. The model supports applications in **coastal monitoring**, **maritime logistics**, **port authority surveillance**, and **marine environmental studies**.
Developed and maintained by **REBOTNIX**, Germany, https://rebotnix.com
# About KINEVA
KINEVA® is an automated training platform based on the MCP Agent system. It regularly delivers new visual computing models, all developed entirely from scratch. This approach enables the creation of customized models tailored to specific client requirements, which can be retrained and re-released as needed. The platform is particularly suited for applications that demand flexibility, adaptability, and technological precision—such as industrial image processing, smart city analytics, or automated object detection.
KINEVA is continuously evolving to meet the growing demands in the fields of artificial intelligence and machine vision. https://rebotnix.com/en/kineva
---
## 🛳️ Example Predictions
<!-- Placeholder for inference visualization images -->
| Input Image | Detection Result |
|-------------|------------------|
| <img src="./example_ship1.jpg" width="300"/> | <img src="./output_1.jpg" width="300"/> |
| <img src="./example_ship2.jpg" width="300"/> | <img src="./output_2.jpg" width="300"/> |
_(More example visualizations coming soon)_
---
## Model Details
- **Architecture**: RF-DETR *(custom training head with optimized anchor boxes)*
- **Task**: Object Detection (Ship class)
- **Trained on**: REBOTNIX Aerial Ship Dataset (proprietary)
- **Format**: PyTorch `.pth` + ONNX and trt export available on request
- **Backbone**: EfficientNet B3 (adapted)
- **Training Framework**: PyTorch + RF-DETR + custom augmentation
---
## Chart

## Dataset
The training dataset consists of **high-resolution aerial imagery** collected from:
- Open-source satellite archives
- Licensed drone surveys
- Custom annotated bounding boxes by REBOTNIX team
The model was trained to be robust across:
- Different vessel sizes (from small boats to cargo ships)
- Varied sea conditions (calm, stormy, cluttered)
- Partial occlusions
- Complex scenes (ports, coasts, open sea)
---
## Intended Use
| ✅ Intended Use | ❌ Not Intended Use |
|----------------|---------------------|
| Port traffic analysis | Naval combat systems |
| Maritime infrastructure monitoring | Submerged object detection |
| Shipping route analytics | Nighttime IR surveillance (unsupported) |
---
## Limitations
- False positives may occur in **heavy fog or extreme wave conditions**
- May confuse small leisure boats with environmental debris
- Designed for **daylight and optical imagery only**
---
## Usage Example
```python
import supervision as sv
from PIL import Image
from rfdetr import RFDETRBase
model_path= "./rb_ship.pth"
CLASS_NAMES = ["ship"]
model = RFDETRBase(pretrain_weights=model_path,num_classes=len(CLASS_NAMES))
image_path = "./example_ship1.jpg"
image = Image.open(image_path)
detections = model.predict(image, threshold=0.35)
labels = [
    f"{CLASS_NAMES[class_id]} {confidence:.2f}"
    for class_id, confidence in zip(detections.class_id, detections.confidence)
]
print(labels)
annotated_image = image.copy()
annotated_image = sv.BoxAnnotator().annotate(annotated_image, detections)
annotated_image = sv.LabelAnnotator().annotate(annotated_image, detections, labels)
annotated_image.save("./output_1.jpg")
```
---
## Contact
📫 For commercial use, model re-training support, or dataset access, contact:
**REBOTNIX**
✉️ Email: [[email protected]](mailto:[email protected])
🌐 Website: [https://rebotnix.com](https://rebotnix.com)
---
## License
This model is released under **CC-BY-NC-SA** unless otherwise noted. For commercial licensing, please reach out to the contact email.
---
|
RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf | RichardErkhov | 2025-04-28T14:55:21Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T13:24:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hp_ablations_qwen_adambeta2_0.95 - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/hp_ablations_qwen_adambeta2_0.95/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [hp_ablations_qwen_adambeta2_0.95.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q2_K.gguf) | Q2_K | 2.81GB |
| [hp_ablations_qwen_adambeta2_0.95.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [hp_ablations_qwen_adambeta2_0.95.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [hp_ablations_qwen_adambeta2_0.95.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [hp_ablations_qwen_adambeta2_0.95.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [hp_ablations_qwen_adambeta2_0.95.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q3_K.gguf) | Q3_K | 3.55GB |
| [hp_ablations_qwen_adambeta2_0.95.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [hp_ablations_qwen_adambeta2_0.95.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [hp_ablations_qwen_adambeta2_0.95.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [hp_ablations_qwen_adambeta2_0.95.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q4_0.gguf) | Q4_0 | 4.13GB |
| [hp_ablations_qwen_adambeta2_0.95.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [hp_ablations_qwen_adambeta2_0.95.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [hp_ablations_qwen_adambeta2_0.95.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q4_K.gguf) | Q4_K | 4.36GB |
| [hp_ablations_qwen_adambeta2_0.95.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [hp_ablations_qwen_adambeta2_0.95.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q4_1.gguf) | Q4_1 | 4.54GB |
| [hp_ablations_qwen_adambeta2_0.95.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q5_0.gguf) | Q5_0 | 4.95GB |
| [hp_ablations_qwen_adambeta2_0.95.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [hp_ablations_qwen_adambeta2_0.95.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q5_K.gguf) | Q5_K | 5.07GB |
| [hp_ablations_qwen_adambeta2_0.95.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [hp_ablations_qwen_adambeta2_0.95.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q5_1.gguf) | Q5_1 | 5.36GB |
| [hp_ablations_qwen_adambeta2_0.95.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q6_K.gguf) | Q6_K | 5.82GB |
| [hp_ablations_qwen_adambeta2_0.95.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf/blob/main/hp_ablations_qwen_adambeta2_0.95.Q8_0.gguf) | Q8_0 | 7.54GB |
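As an example, a single quant can be fetched programmatically and then run with llama.cpp (a sketch; the CLI binary name varies across llama.cpp versions):
```python
from huggingface_hub import hf_hub_download
# Download one quantized file from this repo, then pass the returned path to
# llama.cpp, e.g. `llama-cli -m <path> -p "your prompt"` (older builds call
# the binary `main`).
path = hf_hub_download(
    repo_id="RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_adambeta2_0.95-gguf",
    filename="hp_ablations_qwen_adambeta2_0.95.Q4_K_M.gguf",
)
print(path)
```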
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: hp_ablations_qwen_adambeta2_0.95
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hp_ablations_qwen_adambeta2_0.95
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6345 | 0.9983 | 438 | 0.6252 |
| 0.5966 | 1.9994 | 877 | 0.6188 |
| 0.5759 | 2.9960 | 1314 | 0.6188 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
mradermacher/story_title_generation_model-GGUF | mradermacher | 2025-04-28T14:48:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:shrey-14/story_title_generation_model",
"base_model:quantized:shrey-14/story_title_generation_model",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T14:45:47Z | ---
base_model: shrey-14/story_title_generation_model
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shrey-14/story_title_generation_model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/story_title_generation_model-GGUF/resolve/main/story_title_generation_model.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AllIllusion/q-FrozenLake-v1-4x4 | AllIllusion | 2025-04-28T14:45:40Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"SL-Sprout",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T14:26:49Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
- SL-Sprout
model-index:
- name: q-FrozenLake-v1-4x4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.82 +/- 0.38
name: mean_reward
verified: false
---
# sl_Sprout **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AllIllusion/q-FrozenLake-v1-4x4", filename="sl_TabularModel_FrozenLake-v1.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["FrozenLake-v1"])
```
|
Artorias-23/finetuned-microsoft_phi-2 | Artorias-23 | 2025-04-28T14:44:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2025-04-28T14:44:39Z | ---
base_model: microsoft/phi-2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
tonyzhang666/dc-ae-f32c32-in-1.0-w4-v3 | tonyzhang666 | 2025-04-28T14:43:09Z | 0 | 0 | null | [
"safetensors",
"base_model:mit-han-lab/dc-ae-f32c32-in-1.0",
"base_model:finetune:mit-han-lab/dc-ae-f32c32-in-1.0",
"license:mit",
"region:us"
] | null | 2025-04-28T11:58:27Z | ---
license: mit
base_model:
- mit-han-lab/dc-ae-f32c32-in-1.0
---
# Deep Compression AutoDecoder via Distillation
<center>Jingyuan Zhang [email protected]</center>
<center>School of Electronic Information and Electrical Engineering</center>
<center>Shanghai Jiao Tong University</center>
## Abstract
This project builds a pipeline for obtaining lightweight models with minimal quality loss via distillation. First, we prune the decoder structure to acquire light student models. Then we run distillation training against the teacher model [dc-ae-f32c32-in-1.0](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) to close the gap as much as possible. Throughout training, we freeze the encoder and the project_out part of the decoder. We also employ techniques such as the AdamW optimizer, a CosineAnnealingWarmRestarts scheduler, dynamic loss-weight adjustment, batch accumulation, and segment training. The loss function combines L1 distillation loss, L1 image loss, LPIPS loss, PatchGAN loss, etc. We use FID, PSNR, SSIM, and LPIPS as evaluation metrics for image quality, and MACs/inference time as indicators of speed. The new lightweight models outperform the benchmark in PSNR and SSIM, and there is no visible difference between images generated by the student and teacher models.
## Environment Setup
1. In this folder, run the command below to create a new environment named "myenv"; if you omit `-n myenv`, the environment is named efficientvit by default.
``` bash
conda env create -f environment.yml -n myenv
```
2. Note that the efficientvit packages at efficientvit/applications/dc_ae/scripts/efficientvit and efficientvit/efficientvit have been modified to support the new model configurations.
## Usage
### Demo
The command for the VAE model demonstration is in "efficientvit/applications/dc_ae/scripts/demo_recons.sh"; you may also use the command:
``` bash
CUDA_VISIBLE_DEVICES=1 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/demo-dc-ae-recons.py \
--pretrained_model /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v4
```
The demo picture will be stored in path "efficientvit/applications/dc_ae/reconstruction_results".
### Modify Model Structure
Code for modifying layers and pruning components is in "dc_de_modify_layer.py"; run the command:
``` bash
CUDA_VISIBLE_DEVICES=3 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/dc_de_modify_layer.py \
--prune_method direct --prune_version w4-v3 \
--pretrained_model /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pretrained_models/dc-ae-f32c32-in-1.0 \
--save_path /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v3/model.safetensors
```
There are three pruning methods: "direct" means that if we want to keep 15 of 30 parameters, the first 15 are taken directly; "gap" takes 15 at even intervals; and "random" reinitializes the kept parameters to normally distributed random values.
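A sketch of the three selection strategies (illustrative only; the actual code prunes whole decoder blocks via depth_list rather than individual entries):
```python
import torch
def select(weight: torch.Tensor, k: int, method: str) -> torch.Tensor:
    n = weight.shape[0]
    if method == "direct":    # keep the first k entries
        idx = torch.arange(k)
    elif method == "gap":     # keep k entries at even intervals
        idx = torch.linspace(0, n - 1, k).round().long()
    elif method == "random":  # reinitialize to normally distributed values
        return torch.randn(k, *weight.shape[1:])
    else:
        raise ValueError(f"unknown method: {method}")
    return weight[idx]
```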
The pretrained_model parameter is the path to the pretrained teacher model, and save_path is the path for the target model. The model save path also contains two other files: config.json, whose model_name has been registered in ae_model_zoo.py and dc_ae.py, and training_loss.txt, which records losses during training.
Remember to change file/model paths to the local paths on your device. If you want to create new models by modifying layers, remember to add the corresponding model info in "efficient/model/efficient/dc_ae.py and efficient/ae_model_zoo.py".
### Model List
We have several versions of pruned models listed below; you may download them via the Hugging Face links.
| Model Name | Description | Training Dataset | Note |
| :---------------------------------------------------------------------------------: | :-----------------------------------------: | :-------------------: | :----: |
| [dc-ae-f32c32-in-1.0-w3-v2](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w3-v2) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,5,10,1,1,2], Decoder Compression Ratio 10%, MACs reduce 1.5% | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v1](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v1) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,3,5,2,2,2], Compression Ratio 8%, 22% reduction in total MACs and 40% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v2](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v2) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,3,5,1,1,2], Compression Ratio 14%, 24% reduction in total MACs and 42% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v3](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v3) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,1,2,1,1,2], Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v4](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v4) | Based on dc-ae-f32c32-in-1.0-w3-v2, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.05 * PatchGAN, GAN training ratio 300:1 | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v8](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v8) | Based on dc-ae-f32c32-in-1.0-w4-v1, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.1 * PatchGAN, GAN training ratio 300:1 | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v25](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v25) | Based on dc-ae-f32c32-in-1.0-w4-v3, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.3 * PatchGAN, GAN training ratio 300:1, Dynamic Loss Training | ImageNet | |
### Distillation Training and Evaluation
Normally, for every training run we use 15 epochs, with 600 pictures randomly chosen from ImageNet per epoch, costing about 75 minutes on a single A6000 GPU. Throughout training, we freeze the encoder and the project_out layer of the decoder.
There are two versions of the pipeline, with and without GAN loss, but their main ideas are similar. For the distillation target, there are three choices: before the project_out layer, after TritonRMSNorm2d, and after the ReLU activation function. Based on an ablation study, we choose to align the feature map after the ReLU activation using an L1 loss.
For the image generation loss, we combine L1, LPIPS, and PatchGAN losses with different weights.
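A sketch of the combined generator objective, using the weights from the w4-v25 run (100/1/0.1/0.3) and a standard BCE PatchGAN generator term; this is illustrative, not the project's exact loss code:
```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips
lpips_fn = lpips.LPIPS(net="vgg")
def generator_loss(stu_feat, tea_feat, stu_img, target_img, discriminator):
    # Images are assumed to be normalized to [-1, 1].
    l1_dis = F.l1_loss(stu_feat, tea_feat)      # distillation: align post-ReLU features
    l1_img = F.l1_loss(stu_img, target_img)     # pixel-level image loss
    lp = lpips_fn(stu_img, target_img).mean()   # perceptual loss
    logits = discriminator(stu_img)             # per-patch real/fake logits
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return 100 * l1_dis + 1 * l1_img + 0.1 * lp + 0.3 * adv
```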
For training techniques, we tried the AdamW optimizer, a CosineAnnealingWarmRestarts scheduler, dynamic loss-weight adjustment, batch accumulation, and segment training, but only some of them seemed to work. In the end, we kept the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, and dynamic loss-weight adjustment.
During parameter-tuning experiments, we found that the distillation loss matters far more than the image generation losses. Even when the image evaluation metrics reach satisfactory values, the pictures are still not good enough if the distillation loss remains high. Therefore, we give the distillation loss a much higher weight during the first 10 epochs and let it gradually decrease to half of its original value along a cosine curve; the idea is to ensure the accuracy of distillation first. During the last 5 epochs, we focus more on the image losses and gradually increase their weights to double their original values, also along a cosine curve.
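A sketch of this schedule, assuming the cosine forms described above (the exact functional form in the code is an assumption):
```python
import math
def distillation_weight(epoch, w, division_epoch=10):
    # Decays from w to w/2 over the first `division_epoch` epochs along a cosine curve.
    if epoch < division_epoch:
        t = epoch / division_epoch
        return w * (0.75 + 0.25 * math.cos(math.pi * t))  # w -> w/2
    return w / 2
def image_weight(epoch, w, division_epoch=10, num_epochs=15):
    # Constant at w first, then rises from w to 2w along a cosine curve.
    if epoch < division_epoch:
        return w
    t = (epoch - division_epoch) / (num_epochs - division_epoch)
    return w * (1.5 - 0.5 * math.cos(math.pi * t))  # w -> 2w
```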
For example, to train a student model based on the w4-v3 pruned model, Loss = 100 * L1_Distillation + 1 * L1_Image + 0.1 * LPIPS_Image + 0.3 * PatchGAN_Image. The training ratio between the generator (student model) and the discriminator is 300:1, i.e., we train the student model on 300 samples and then train the GAN discriminator once, so that the discriminator does not learn too fast and leave the student without a useful signal for producing realistic images (see the sketch after the command below). The training command is as follows (the complete training commands for the three best models are in train_distillation.sh):
``` bash
# Training Code for dc-ae-f32c32-in-1.0-w4-v25
# Expected FID: 2.22879, PSNR: 26.22011, SSIM: 0.72431, LPIPS: 0.12579
# Based on model w4-v3, decoder.depth_list=[0,5,10,2,2,2] -> [0,1,2,1,1,2],
# Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs
CUDA_VISIBLE_DEVICES=7 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/dc_de_distillation_gan.py \
--batch_size 4 --learning_rate_G 1e-4 --learning_rate_D 1e-4 --num_epochs 15 --train_samples 600 \
--student_model_path /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v3 \
--model_save_dir /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v25 \
--pic_save_dir /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/pic_results_w4_v25 \
--alpha_disti 100 --alpha_img 1 --beta 0.1 --gamma 0.3 --gan_ratio 300 --align 3 --freeze_proj_out True --freeze_encoder True \
--cosine_T_0_G 5 --cosine_T_mult_G 1 --eta_min_G 1e-6 --weight_decay_G 0.01 \
--cosine_T_0_D 5 --cosine_T_mult_D 1 --eta_min_D 1e-6 --weight_decay_D 0.01 \
--dynamic_loss True --division_epoch 10 \
--accumulate_batch False --accumulation_steps 4 \
--shallow_train False --shallow_training_epochs 5 --model_config dc-ae-f32c32-in-1.0-pruned-w4-v3
```
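To make the 300:1 schedule concrete, here is a hedged toy sketch of the alternation: the student updates on every batch, while the discriminator updates only once per gan_ratio samples. The modules and losses below are stand-ins, not the actual training-script API.
``` python
import torch
import torch.nn as nn

# Toy stand-ins for the student decoder and the PatchGAN discriminator.
student = nn.Conv2d(3, 3, 3, padding=1)
disc = nn.Conv2d(3, 1, 4, stride=2, padding=1)
opt_G = torch.optim.AdamW(student.parameters(), lr=1e-4, weight_decay=0.01)
opt_D = torch.optim.AdamW(disc.parameters(), lr=1e-4, weight_decay=0.01)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

gan_ratio, samples_since_d_step = 300, 0
for step in range(600):                       # stands in for the dataloader loop
    images = torch.rand(4, 3, 64, 64)         # stands in for a real image batch
    recon = student(images)

    # Generator (student) step on every batch: image loss + adversarial term.
    fake_logits = disc(recon)
    loss_G = l1(recon, images) + 0.3 * bce(fake_logits, torch.ones_like(fake_logits))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Discriminator step only once per `gan_ratio` samples, so it cannot
    # learn too fast and leave the student without a useful signal.
    samples_since_d_step += images.size(0)
    if samples_since_d_step >= gan_ratio:
        samples_since_d_step = 0
        real_logits, fake_logits = disc(images), disc(recon.detach())
        loss_D = bce(real_logits, torch.ones_like(real_logits)) \
               + bce(fake_logits, torch.zeros_like(fake_logits))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
```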
You can measure the evaluation metrics (FID, PSNR, SSIM, LPIPS) of a model with the command below; be sure to substitute the "model" argument with your target model path.
``` bash
CUDA_VISIBLE_DEVICES=7 torchrun --nnodes=1 --nproc_per_node=1 --master_port 29505 -m applications.dc_ae.eval_dc_ae_model dataset=imagenet_512 model=/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v25 run_dir=tmp
```
During training, the generated pictures (ground truth, teacher model, student model) at the end of each epoch are stored in "pic_save_dir", and the loss information is saved to "model_save_dir/training_losses.txt", so we can better monitor the training process and analyze problems.
The batch accumulation method uses the "accumulation_steps" parameter to update the model parameters only after a certain number of batches, virtually increasing the batch size.
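A minimal, self-contained sketch of the assumed semantics, with a toy model standing in for the student: gradients from several small batches are summed before a single optimizer step, emulating a batch size of batch_size * accumulation_steps.
``` python
import torch

model = torch.nn.Linear(8, 1)                         # toy stand-in for the student
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
accumulation_steps = 4

opt.zero_grad()
for i in range(16):                                   # stands in for the batch loop
    x, y = torch.randn(4, 8), torch.randn(4, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accumulation_steps).backward()            # gradients sum across small batches
    if (i + 1) % accumulation_steps == 0:             # one optimizer update per 4 batches
        opt.step()
        opt.zero_grad()
```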
In the segment training method, we intended to train shallow layers/blocks first and deeper ones later to reduce the training cost and improve efficiency. Unfortunately, these attempts did not work.
For the exact meaning, type, default value, etc. of each parameter, please refer to the code or the Appendix. If you are interested in a more detailed account of the training and debugging process, you may also refer to [this Feishu Docs](https://sjtu.feishu.cn/docx/TaexdtRxfoLwsoxbrQQcS9nynRe).
## Demo of DC_DE
- Demo of training results
| Model | Description | Result | Epoch 15 |
| :---------------------------------------------------------------------------------: | :-----------------------------------------: | :-------------------: | :---------------: |
| [dc-ae-f32c32-in-1.0-w4-v4](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) | **Based on dc-ae-f32c32-in-1.0-w3-v2**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.05 * PatchGAN, GAN training ratio 300:1, Compression Ratio 10%, 1.5% reduction in total MACs | Ground Truth |  |
| | | Teacher Model|  |
| | | Ours|  |
| [dc-ae-f32c32-in-1.0-w4-v8](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0) | **Based on dc-ae-f32c32-in-1.0-w4-v1**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.1 * PatchGAN, GAN training ratio 300:1, Compression Ratio 8%, 22% reduction in total MACs and 40% reduction in decoder MACs | Ground Truth |  |
| | | Teacher Model |  |
| | | Ours |  |
| [dc-ae-f32c32-in-1.0-w4-v25](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0) | **Based on dc-ae-f32c32-in-1.0-w4-v3**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.3 * PatchGAN, GAN training ratio 300:1, Dynamic Loss Training, Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs | Ground Truth |  |
| | |Teacher Model|  |
| | | Ours |  |
- Demo of a Sample Picture (Girl) via Different Models
| Ground Truth | Teacher Model | dc-ae-f32c32-in-1.0-w4-v4 | dc-ae-f32c32-in-1.0-w4-v8 | dc-ae-f32c32-in-1.0-w4-v25 |
| :-------------------: | :-------------------: | :-------------------: | :-------------------: | :-------------------: |
|  |  |  |  |  |
- Evaluation Metrics of Models
| Model Name | FID(↓) | PSNR(↑) | SSIM(↑) | LPIPS(↓) |
| :---------------: | :-------------------: | :-------------------: | :---------------: | :---------------: |
| dc-ae-f32c32-in-1.0(benchmark) | 0.2047 | 26.2547 | 0.7136| 0.0783 |
| dc-ae-f32c32-in-1.0-w4-v4 | 0.83769 | 26.5646 | 0.73160| 0.09752 |
| dc-ae-f32c32-in-1.0-w4-v8 | 1.69890 | 26.55881 | 0.73866| 0.11312 |
| dc-ae-f32c32-in-1.0-w4-v25 | 2.22879 | 26.22011 | 0.72431| 0.12579 |
## Appendix
The parameters and their corresponding descriptions are as follows:
``` python
parser.add_argument("--teacher_model_path", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pretrained_models/dc-ae-f32c32-in-1.0", required=False, help="Path to the teacher model.")
parser.add_argument("--student_model_path", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-v1", required=False, help="Path to the student model.")
parser.add_argument("--model_config", type=str, default="dc-ae-f32c32-in-1.0-pruned-w4-v3", required=True, help="Config name of the model.")
parser.add_argument("--dataset_path", type=str, default="/home/jyzhang/dataset/imagenet/train", required=False, help="Path to the dataset (e.g., ImageNet).")
parser.add_argument("--batch_size", type=int, default=16, help="Batch size for training.")
parser.add_argument("--learning_rate_G", type=float, default=1e-4, help="Learning rate for training the Generator (student model).")
parser.add_argument("--learning_rate_D", type=float, default=1e-4, help="Learning rate for training the Discriminator.")
parser.add_argument("--alpha_disti", type=float, default=1.0, help="Weight for the L1 distillation loss.")
parser.add_argument("--alpha_img", type=float, default=0.8, help="Weight for the L1 image loss.")
parser.add_argument("--beta", type=float, default=0.1, help="Weight for the LPIPS loss.")
parser.add_argument("--gamma", type=float, default=0.05, help="Weight for the PatchGAN loss.")
parser.add_argument("--num_epochs", type=int, default=10, help="Number of epochs for training.")
parser.add_argument("--shallow_train", type=bool, default=False, required=False, help="Whether to train shallow layers first and all layers later.")
parser.add_argument("--shallow_training_epochs", type=int, default=5, help="Number of epochs for shallow-layer training.")
parser.add_argument("--gan_ratio", type=int, default=10000, help="Number of generator training samples per discriminator update.")
parser.add_argument("--align", type=int, default=0, required=False, help="Feature to align: 0 for the final feature after project_out, 1 for the feature before project_out, 2 for the feature after Norm, 3 for the feature after ReLU.")
parser.add_argument("--train_samples", type=int, default=1281167, help="Number of image samples for training. 1281167 is the total number of samples in ImageNet.")
parser.add_argument("--pic_save_dir", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/reconstruction_results", required=False, help="Directory to save sampled images.")
parser.add_argument("--model_save_dir", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models", required=False, help="Directory to save the distilled model.")
parser.add_argument("--freeze_proj_out", type=bool, default=True, required=False, help="Whether to freeze the proj_out layer during training. It should be frozen for distillation training.")
parser.add_argument("--freeze_encoder", type=bool, default=True, required=False, help="Whether to freeze the encoder during training. It should be frozen for distillation training.")
parser.add_argument("--weight_decay_G", type=float, default=0.01, help="Weight decay for the Generator's AdamW optimizer.")
parser.add_argument("--weight_decay_D", type=float, default=0.01, help="Weight decay for the Discriminator's AdamW optimizer.")
parser.add_argument("--cosine_T_0_G", type=int, default=10, help="Number of iterations for the first restart for the Generator.")
parser.add_argument("--cosine_T_0_D", type=int, default=10, help="Number of iterations for the first restart for the Discriminator.")
parser.add_argument("--cosine_T_mult_G", type=int, default=1, help="Factor by which T_i increases after each restart for the Generator.")
parser.add_argument("--cosine_T_mult_D", type=int, default=1, help="Factor by which T_i increases after each restart for the Discriminator.")
parser.add_argument("--eta_min_G", type=float, default=1e-6, help="Minimum learning rate for the Generator.")
parser.add_argument("--eta_min_D", type=float, default=1e-6, help="Minimum learning rate for the Discriminator.")
parser.add_argument("--dynamic_loss", type=bool, default=False, required=False, help="Whether to use the dynamic loss adaptation strategy.")
parser.add_argument("--division_epoch", type=int, default=10, required=False, help="Before the division epoch, focus more on the distillation loss; after it, focus more on the image losses.")
parser.add_argument("--accumulate_batch", type=bool, default=False, required=False, help="Whether to use the batch accumulation training strategy.")
parser.add_argument("--accumulation_steps", type=int, default=4, required=False, help="Number of batches over which to accumulate gradients before each optimizer update.")
```
|
tonyzhang666/dc-ae-f32c32-in-1.0-w4-v1 | tonyzhang666 | 2025-04-28T14:42:26Z | 0 | 0 | null | [
"safetensors",
"base_model:mit-han-lab/dc-ae-f32c32-in-1.0",
"base_model:finetune:mit-han-lab/dc-ae-f32c32-in-1.0",
"license:mit",
"region:us"
] | null | 2025-04-28T11:48:05Z | ---
license: mit
base_model:
- mit-han-lab/dc-ae-f32c32-in-1.0
---
# Deep Compression AutoDecoder via Distillation
<center>Jingyuan Zhang [email protected]</center>
<center>School of Electronic Information and Electrical Engineering</center>
<center>Shanghai Jiao Tong University</center>
## Abstract
This project builds a pipeline for obtaining lightweight models with minimal quality loss via distillation. First, we prune the decoder structure to obtain lightweight student models. Then, we run distillation training against the teacher model [dc-ae-f32c32-in-1.0](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) to close the gap as much as possible. During training, we always freeze the encoder and the project_out part of the decoder. We also employ techniques such as the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, dynamic loss weight adjustment, batch accumulation, and segment training. The loss function combines an L1 distillation loss, an L1 image loss, an LPIPS loss, and a PatchGAN loss. We use FID, PSNR, SSIM, and LPIPS as evaluation metrics for image quality, and MACs/inference time as indicators of speed. The resulting lightweight models outperform the benchmark in PSNR and SSIM, and there is no significant visual difference between their generated images and the teacher model's.
## Environment Setup
1. In this folder, run the command below to create a new environment named "myenv"; otherwise it will create an environment named "efficientvit" by default.
``` bash
conda env create -f environment.yml -n myenv
```
2. Note that the efficientvit packages at efficientvit/applications/dc_ae/scripts/efficientvit and efficientvit/efficientvit have been modified to support the new model configurations.
## Usage
### Demo
The command for the VAE model demonstration is in "efficientvit/applications/dc_ae/scripts/demo_recons.sh"; you may also use the command:
``` bash
CUDA_VISIBLE_DEVICES=1 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/demo-dc-ae-recons.py \
--pretrained_model /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v4
```
The demo picture will be stored in "efficientvit/applications/dc_ae/reconstruction_results".
### Modify Model Structure
The code for modifying layers and pruning components is in "dc_de_modify_layer.py"; run the command:
``` bash
CUDA_VISIBLE_DEVICES=3 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/dc_de_modify_layer.py \
--prune_method direct --prune_version w4-v3 \
--pretrained_model /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pretrained_models/dc-ae-f32c32-in-1.0 \
--save_path /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v3/model.safetensors
```
There are three pruning methods. "direct" keeps the leading parameters: to keep 15 out of 30, the first 15 are taken directly. "gap" keeps 15 at even intervals, and "random" initializes the kept weights with normally distributed random numbers, as sketched below.
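Below is an illustrative sketch of the three selection strategies, assuming each prunable decoder stage is a ModuleList of blocks; prune_blocks and its exact behavior are assumptions for illustration, not the actual dc_de_modify_layer.py code.
``` python
import torch
import torch.nn as nn

def prune_blocks(blocks: nn.ModuleList, keep: int, method: str = "direct") -> nn.ModuleList:
    # Illustrative only; the real logic lives in dc_de_modify_layer.py.
    if method == "direct":                       # keep the first `keep` blocks as-is
        idx = list(range(keep))
    elif method == "gap":                        # keep `keep` blocks at even intervals
        step = len(blocks) / keep
        idx = [int(i * step) for i in range(keep)]
    elif method == "random":                     # keep blocks but re-init weights ~ N(0, 1)
        idx = list(range(keep))
        for i in idx:
            for p in blocks[i].parameters():
                nn.init.normal_(p)
    else:
        raise ValueError(f"unknown method: {method}")
    return nn.ModuleList(blocks[i] for i in idx)

# Example: keep 15 of 30 blocks at even intervals.
stage = nn.ModuleList(nn.Linear(8, 8) for _ in range(30))
pruned = prune_blocks(stage, keep=15, method="gap")
```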
The pretrained_model parameter is the path to the pretrained teacher model, and save_path is the path for the target model. The model save path contains two other files: config.json, whose model_name must be registered in ae_model_zoo.py and dc_ae.py, and training_loss.txt, which records losses during training.
Remember to change the file/model paths to the local paths on your device. If you want to create new models by modifying layers, remember to add the corresponding model info in "efficientvit/models/efficientvit/dc_ae.py" and "efficientvit/ae_model_zoo.py".
### Model List
We have several versions of pruned models listed below; you may download them from the links in the table.
| Model Name | Description | Training Dataset | Note |
| :---------------------------------------------------------------------------------: | :-----------------------------------------: | :-------------------: | :----: |
| [dc-ae-f32c32-in-1.0-w3-v2](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w3-v2) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,5,10,1,1,2], Decoder Compression Ratio 10%, MACs reduce 1.5% | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v1](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v1) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,3,5,2,2,2], Compression Ratio 8%, 22% reduction in total MACs and 40% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v2](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v2) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,3,5,1,1,2], Compression Ratio 14%, 24% reduction in total MACs and 42% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v3](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v3) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,1,2,1,1,2], Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v4](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v4) | Based on dc-ae-f32c32-in-1.0-w3-v2, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.05 * PatchGAN, GAN training ratio 300:1 | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v8](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v8) | Based on dc-ae-f32c32-in-1.0-w4-v1, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.1 * PatchGAN, GAN training ratio 300:1 | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v25](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v25) | Based on dc-ae-f32c32-in-1.0-w4-v3, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.3 * PatchGAN, GAN training ratio 300:1, Dynamic Loss Training | ImageNet | |
### Distillation Training and Evaluation
For all training runs, we use 15 epochs with 600 images randomly sampled from ImageNet per epoch, which takes about 75 minutes on a single A6000 GPU. Throughout training, we freeze the encoder and the project_out layer of the decoder.
There are two versions of the pipeline, with and without GAN loss, but their main ideas are similar. For the distillation part, there are three choices of alignment point: before the project_out layer, after TritonRMSNorm2d, or after the ReLU activation. Based on an ablation study, we align the features after the ReLU activation using an L1 loss.
For the image generation loss, we combine the L1, LPIPS, and PatchGAN losses with different weights.
For training techniques, we experimented with the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, dynamic loss weight adjustment, batch accumulation, and segment training, but only some of them proved effective. In the end, we adopted the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, and dynamic loss weight adjustment.
During parameter tuning, we found that the distillation loss matters far more than the image generation loss: even when the image evaluation metrics reach satisfactory values, the pictures are still not good enough if the distillation loss remains high. Therefore, we give the distillation loss a much higher weight during the first 10 epochs and let it gradually decrease to half of its original value along a cosine schedule; the idea is to ensure the accuracy of distillation first. During the last 5 epochs, we focus more on the image losses and gradually increase their weights to double their original values, again along a cosine schedule.
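For concreteness, here is a minimal sketch of this schedule, assuming a half-period cosine in each phase; the function name and exact curve are illustrative assumptions, while the actual implementation lives in dc_de_distillation_gan.py.
``` python
import math

def dynamic_loss_weights(epoch, num_epochs=15, division_epoch=10,
                         alpha_disti=100.0, alpha_img=1.0, beta=0.1, gamma=0.3):
    # Illustrative sketch only (not the actual training-script code).
    if epoch < division_epoch:
        # Phase 1: distillation weight decays from alpha_disti to 0.5 * alpha_disti.
        progress = epoch / division_epoch
        w_disti = alpha_disti * (0.75 + 0.25 * math.cos(math.pi * progress))
        w_img, w_lpips, w_gan = alpha_img, beta, gamma
    else:
        # Phase 2: image-loss weights grow from 1x to 2x their original values.
        progress = (epoch - division_epoch) / (num_epochs - division_epoch)
        scale = 1.5 - 0.5 * math.cos(math.pi * progress)  # 1.0 -> 2.0
        w_disti = 0.5 * alpha_disti
        w_img, w_lpips, w_gan = alpha_img * scale, beta * scale, gamma * scale
    return w_disti, w_img, w_lpips, w_gan
```
With the default weights above, the distillation weight moves from 100 toward 50 over the first 10 epochs while the image weights stay fixed, and then the image weights grow toward double over the last 5 epochs.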
For example, to train a student model based on the w4-v3 pruned model, Loss = 100 * L1_Distillation + 1 * L1_Image + 0.1 * LPIPS_Image + 0.3 * PatchGAN_Image. The training ratio between the generator (student model) and the discriminator is 300:1, i.e., we train the student model on 300 samples and then train the GAN discriminator once, so that the discriminator does not learn too fast and leave the student without a useful signal for producing realistic images (see the sketch after the command below). The training command is as follows (the complete training commands for the three best models are in train_distillation.sh):
``` bash
# Training Code for dc-ae-f32c32-in-1.0-w4-v25
# Expected FID: 2.22879, PSNR: 26.22011, SSIM: 0.72431, LPIPS: 0.12579
# Based on model w4-v3, decoder.depth_list=[0,5,10,2,2,2] -> [0,1,2,1,1,2],
# Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs
CUDA_VISIBLE_DEVICES=7 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/dc_de_distillation_gan.py \
--batch_size 4 --learning_rate_G 1e-4 --learning_rate_D 1e-4 --num_epochs 15 --train_samples 600 \
--student_model_path /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v3 \
--model_save_dir /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v25 \
--pic_save_dir /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/pic_results_w4_v25 \
--alpha_disti 100 --alpha_img 1 --beta 0.1 --gamma 0.3 --gan_ratio 300 --align 3 --freeze_proj_out True --freeze_encoder True \
--cosine_T_0_G 5 --cosine_T_mult_G 1 --eta_min_G 1e-6 --weight_decay_G 0.01 \
--cosine_T_0_D 5 --cosine_T_mult_D 1 --eta_min_D 1e-6 --weight_decay_D 0.01 \
--dynamic_loss True --division_epoch 10 \
--accumulate_batch False --accumulation_steps 4 \
--shallow_train False --shallow_training_epochs 5 --model_config dc-ae-f32c32-in-1.0-pruned-w4-v3
```
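To make the 300:1 schedule concrete, here is a hedged toy sketch of the alternation: the student updates on every batch, while the discriminator updates only once per gan_ratio samples. The modules and losses below are stand-ins, not the actual training-script API.
``` python
import torch
import torch.nn as nn

# Toy stand-ins for the student decoder and the PatchGAN discriminator.
student = nn.Conv2d(3, 3, 3, padding=1)
disc = nn.Conv2d(3, 1, 4, stride=2, padding=1)
opt_G = torch.optim.AdamW(student.parameters(), lr=1e-4, weight_decay=0.01)
opt_D = torch.optim.AdamW(disc.parameters(), lr=1e-4, weight_decay=0.01)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

gan_ratio, samples_since_d_step = 300, 0
for step in range(600):                       # stands in for the dataloader loop
    images = torch.rand(4, 3, 64, 64)         # stands in for a real image batch
    recon = student(images)

    # Generator (student) step on every batch: image loss + adversarial term.
    fake_logits = disc(recon)
    loss_G = l1(recon, images) + 0.3 * bce(fake_logits, torch.ones_like(fake_logits))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Discriminator step only once per `gan_ratio` samples, so it cannot
    # learn too fast and leave the student without a useful signal.
    samples_since_d_step += images.size(0)
    if samples_since_d_step >= gan_ratio:
        samples_since_d_step = 0
        real_logits, fake_logits = disc(images), disc(recon.detach())
        loss_D = bce(real_logits, torch.ones_like(real_logits)) \
               + bce(fake_logits, torch.zeros_like(fake_logits))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
```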
You can measure the evaluation metrics (FID, PSNR, SSIM, LPIPS) of a model with the command below; be sure to substitute the "model" argument with your target model path.
``` bash
CUDA_VISIBLE_DEVICES=7 torchrun --nnodes=1 --nproc_per_node=1 --master_port 29505 -m applications.dc_ae.eval_dc_ae_model dataset=imagenet_512 model=/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v25 run_dir=tmp
```
During training, the generated pictures (ground truth, teacher model, student model) at the end of each epoch are stored in "pic_save_dir", and the loss information is saved to "model_save_dir/training_losses.txt", so we can better monitor the training process and analyze problems.
The batch accumulation method uses the "accumulation_steps" parameter to update the model parameters only after a certain number of batches, virtually increasing the batch size.
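A minimal, self-contained sketch of the assumed semantics, with a toy model standing in for the student: gradients from several small batches are summed before a single optimizer step, emulating a batch size of batch_size * accumulation_steps.
``` python
import torch

model = torch.nn.Linear(8, 1)                         # toy stand-in for the student
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
accumulation_steps = 4

opt.zero_grad()
for i in range(16):                                   # stands in for the batch loop
    x, y = torch.randn(4, 8), torch.randn(4, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accumulation_steps).backward()            # gradients sum across small batches
    if (i + 1) % accumulation_steps == 0:             # one optimizer update per 4 batches
        opt.step()
        opt.zero_grad()
```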
In the segment training method, we intended to train shallow layers/blocks first and deeper ones later to reduce the training cost and improve efficiency. Unfortunately, these attempts did not work.
For the exact meaning, type, default value, etc. of each parameter, please refer to the code or the Appendix. If you are interested in a more detailed account of the training and debugging process, you may also refer to [this Feishu Docs](https://sjtu.feishu.cn/docx/TaexdtRxfoLwsoxbrQQcS9nynRe).
## Demo of DC_DE
- Demo of training results
| Model | Description | Result | Epoch 15 |
| :---------------------------------------------------------------------------------: | :-----------------------------------------: | :-------------------: | :---------------: |
| [dc-ae-f32c32-in-1.0-w4-v4](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) | **Based on dc-ae-f32c32-in-1.0-w3-v2**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.05 * PatchGAN, GAN training ratio 300:1, Compression Ratio 10%, 1.5% reduction in total MACs | Ground Truth |  |
| | | Teacher Model|  |
| | | Ours|  |
| [dc-ae-f32c32-in-1.0-w4-v8](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0) | **Based on dc-ae-f32c32-in-1.0-w4-v1**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.1 * PatchGAN, GAN training ratio 300:1, Compression Ratio 8%, 22% reduction in total MACs and 40% reduction in decoder MACs | Ground Truth |  |
| | | Teacher Model |  |
| | | Ours |  |
| [dc-ae-f32c32-in-1.0-w4-v25](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0) | **Based on dc-ae-f32c32-in-1.0-w4-v3**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.3 * PatchGAN, GAN training ratio 300:1, Dynamic Loss Training, Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs | Ground Truth |  |
| | |Teacher Model|  |
| | | Ours |  |
- Demo of a Sample Picture (Girl) via Different Models
| Ground Truth | Teacher Model | dc-ae-f32c32-in-1.0-w4-v4 | dc-ae-f32c32-in-1.0-w4-v8 | dc-ae-f32c32-in-1.0-w4-v25 |
| :-------------------: | :-------------------: | :-------------------: | :-------------------: | :-------------------: |
|  |  |  |  |  |
- Evaluation Metrics of Models
| Model Name | FID(↓) | PSNR(↑) | SSIM(↑) | LPIPS(↓) |
| :---------------: | :-------------------: | :-------------------: | :---------------: | :---------------: |
| dc-ae-f32c32-in-1.0(benchmark) | 0.2047 | 26.2547 | 0.7136| 0.0783 |
| dc-ae-f32c32-in-1.0-w4-v4 | 0.83769 | 26.5646 | 0.73160| 0.09752 |
| dc-ae-f32c32-in-1.0-w4-v8 | 1.69890 | 26.55881 | 0.73866| 0.11312 |
| dc-ae-f32c32-in-1.0-w4-v25 | 2.22879 | 26.22011 | 0.72431| 0.12579 |
## Appendix
The parameters and their corresponding descriptions are as follows:
``` python
parser.add_argument("--teacher_model_path", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pretrained_models/dc-ae-f32c32-in-1.0", required=False, help="Path to the teacher model.")
parser.add_argument("--student_model_path", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-v1", required=False, help="Path to the student model.")
parser.add_argument("--model_config", type=str, default="dc-ae-f32c32-in-1.0-pruned-w4-v3", required=True, help="Config name of the model.")
parser.add_argument("--dataset_path", type=str, default="/home/jyzhang/dataset/imagenet/train", required=False, help="Path to the dataset (e.g., ImageNet).")
parser.add_argument("--batch_size", type=int, default=16, help="Batch size for training.")
parser.add_argument("--learning_rate_G", type=float, default=1e-4, help="Learning rate for training the Generator (student model).")
parser.add_argument("--learning_rate_D", type=float, default=1e-4, help="Learning rate for training the Discriminator.")
parser.add_argument("--alpha_disti", type=float, default=1.0, help="Weight for the L1 distillation loss.")
parser.add_argument("--alpha_img", type=float, default=0.8, help="Weight for the L1 image loss.")
parser.add_argument("--beta", type=float, default=0.1, help="Weight for the LPIPS loss.")
parser.add_argument("--gamma", type=float, default=0.05, help="Weight for the PatchGAN loss.")
parser.add_argument("--num_epochs", type=int, default=10, help="Number of epochs for training.")
parser.add_argument("--shallow_train", type=bool, default=False, required=False, help="Whether to train shallow layers first and all layers later.")
parser.add_argument("--shallow_training_epochs", type=int, default=5, help="Number of epochs for shallow-layer training.")
parser.add_argument("--gan_ratio", type=int, default=10000, help="Number of generator training samples per discriminator update.")
parser.add_argument("--align", type=int, default=0, required=False, help="Feature to align: 0 for the final feature after project_out, 1 for the feature before project_out, 2 for the feature after Norm, 3 for the feature after ReLU.")
parser.add_argument("--train_samples", type=int, default=1281167, help="Number of image samples for training. 1281167 is the total number of samples in ImageNet.")
parser.add_argument("--pic_save_dir", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/reconstruction_results", required=False, help="Directory to save sampled images.")
parser.add_argument("--model_save_dir", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models", required=False, help="Directory to save the distilled model.")
parser.add_argument("--freeze_proj_out", type=bool, default=True, required=False, help="Whether to freeze the proj_out layer during training. It should be frozen for distillation training.")
parser.add_argument("--freeze_encoder", type=bool, default=True, required=False, help="Whether to freeze the encoder during training. It should be frozen for distillation training.")
parser.add_argument("--weight_decay_G", type=float, default=0.01, help="Weight decay for the Generator's AdamW optimizer.")
parser.add_argument("--weight_decay_D", type=float, default=0.01, help="Weight decay for the Discriminator's AdamW optimizer.")
parser.add_argument("--cosine_T_0_G", type=int, default=10, help="Number of iterations for the first restart for the Generator.")
parser.add_argument("--cosine_T_0_D", type=int, default=10, help="Number of iterations for the first restart for the Discriminator.")
parser.add_argument("--cosine_T_mult_G", type=int, default=1, help="Factor by which T_i increases after each restart for the Generator.")
parser.add_argument("--cosine_T_mult_D", type=int, default=1, help="Factor by which T_i increases after each restart for the Discriminator.")
parser.add_argument("--eta_min_G", type=float, default=1e-6, help="Minimum learning rate for the Generator.")
parser.add_argument("--eta_min_D", type=float, default=1e-6, help="Minimum learning rate for the Discriminator.")
parser.add_argument("--dynamic_loss", type=bool, default=False, required=False, help="Whether to use the dynamic loss adaptation strategy.")
parser.add_argument("--division_epoch", type=int, default=10, required=False, help="Before the division epoch, focus more on the distillation loss; after it, focus more on the image losses.")
parser.add_argument("--accumulate_batch", type=bool, default=False, required=False, help="Whether to use the batch accumulation training strategy.")
parser.add_argument("--accumulation_steps", type=int, default=4, required=False, help="Number of batches over which to accumulate gradients before each optimizer update.")
```
|
tonyzhang666/dc-ae-f32c32-in-1.0-w3-v2 | tonyzhang666 | 2025-04-28T14:42:03Z | 0 | 0 | null | [
"safetensors",
"base_model:mit-han-lab/dc-ae-f32c32-in-1.0",
"base_model:finetune:mit-han-lab/dc-ae-f32c32-in-1.0",
"license:mit",
"region:us"
] | null | 2025-04-28T13:27:03Z | ---
license: mit
base_model:
- mit-han-lab/dc-ae-f32c32-in-1.0
---
# Deep Compression AutoDecoder via Distillation
<center>Jingyuan Zhang [email protected]</center>
<center>School of Electronic Information and Electrical Engineering</center>
<center>Shanghai Jiao Tong University</center>
## Abstract
This project builds a pipeline for obtaining lightweight models with minimal quality loss via distillation. First, we prune the decoder structure to obtain lightweight student models. Then, we run distillation training against the teacher model [dc-ae-f32c32-in-1.0](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) to close the gap as much as possible. During training, we always freeze the encoder and the project_out part of the decoder. We also employ techniques such as the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, dynamic loss weight adjustment, batch accumulation, and segment training. The loss function combines an L1 distillation loss, an L1 image loss, an LPIPS loss, and a PatchGAN loss. We use FID, PSNR, SSIM, and LPIPS as evaluation metrics for image quality, and MACs/inference time as indicators of speed. The resulting lightweight models outperform the benchmark in PSNR and SSIM, and there is no significant visual difference between their generated images and the teacher model's.
## Environment Setup
1. In this folder, run the command below to create a new environment named "myenv"; otherwise it will create an environment named "efficientvit" by default.
``` bash
conda env create -f environment.yml -n myenv
```
2. Note that the efficientvit packages at efficientvit/applications/dc_ae/scripts/efficientvit and efficientvit/efficientvit have been modified to support the new model configurations.
## Usage
### Demo
The command for the VAE model demonstration is in "efficientvit/applications/dc_ae/scripts/demo_recons.sh"; you may also use the command:
``` bash
CUDA_VISIBLE_DEVICES=1 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/demo-dc-ae-recons.py \
--pretrained_model /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v4
```
The demo picture will be stored in "efficientvit/applications/dc_ae/reconstruction_results".
### Modify Model Structure
The code for modifying layers and pruning components is in "dc_de_modify_layer.py"; run the command:
``` bash
CUDA_VISIBLE_DEVICES=3 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/dc_de_modify_layer.py \
--prune_method direct --prune_version w4-v3 \
--pretrained_model /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pretrained_models/dc-ae-f32c32-in-1.0 \
--save_path /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v3/model.safetensors
```
There are three pruning methods. "direct" keeps the leading parameters: to keep 15 out of 30, the first 15 are taken directly. "gap" keeps 15 at even intervals, and "random" initializes the kept weights with normally distributed random numbers, as sketched below.
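Below is an illustrative sketch of the three selection strategies, assuming each prunable decoder stage is a ModuleList of blocks; prune_blocks and its exact behavior are assumptions for illustration, not the actual dc_de_modify_layer.py code.
``` python
import torch
import torch.nn as nn

def prune_blocks(blocks: nn.ModuleList, keep: int, method: str = "direct") -> nn.ModuleList:
    # Illustrative only; the real logic lives in dc_de_modify_layer.py.
    if method == "direct":                       # keep the first `keep` blocks as-is
        idx = list(range(keep))
    elif method == "gap":                        # keep `keep` blocks at even intervals
        step = len(blocks) / keep
        idx = [int(i * step) for i in range(keep)]
    elif method == "random":                     # keep blocks but re-init weights ~ N(0, 1)
        idx = list(range(keep))
        for i in idx:
            for p in blocks[i].parameters():
                nn.init.normal_(p)
    else:
        raise ValueError(f"unknown method: {method}")
    return nn.ModuleList(blocks[i] for i in idx)

# Example: keep 15 of 30 blocks at even intervals.
stage = nn.ModuleList(nn.Linear(8, 8) for _ in range(30))
pruned = prune_blocks(stage, keep=15, method="gap")
```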
The pretrained_model parameter is the path to the pretrained teacher model, and save_path is the path for the target model. The model save path contains two other files: config.json, whose model_name must be registered in ae_model_zoo.py and dc_ae.py, and training_loss.txt, which records losses during training.
Remember to change the file/model paths to the local paths on your device. If you want to create new models by modifying layers, remember to add the corresponding model info in "efficientvit/models/efficientvit/dc_ae.py" and "efficientvit/ae_model_zoo.py".
### Model List
We have several versions of pruned models listed below; you may download them from the links in the table.
| Model Name | Description | Training Dataset | Note |
| :---------------------------------------------------------------------------------: | :-----------------------------------------: | :-------------------: | :----: |
| [dc-ae-f32c32-in-1.0-w3-v2](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w3-v2) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,5,10,1,1,2], Decoder Compression Ratio 10%, MACs reduce 1.5% | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v1](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v1) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,3,5,2,2,2], Compression Ratio 8%, 22% reduction in total MACs and 40% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v2](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v2) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,3,5,1,1,2], Compression Ratio 14%, 24% reduction in total MACs and 42% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v3](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v3) | Base Model, decoder.depth_list=[0,5,10,2,2,2] -> [0,1,2,1,1,2], Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v4](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v4) | Based on dc-ae-f32c32-in-1.0-w3-v2, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.05 * PatchGAN, GAN training ratio 300:1 | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v8](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v8) | Based on dc-ae-f32c32-in-1.0-w4-v1, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.1 * PatchGAN, GAN training ratio 300:1 | ImageNet | |
| [dc-ae-f32c32-in-1.0-w4-v25](https://huggingface.co/tonyzhang666/dc-ae-f32c32-in-1.0-w4-v25) | Based on dc-ae-f32c32-in-1.0-w4-v3, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.3 * PatchGAN, GAN training ratio 300:1, Dynamic Loss Training | ImageNet | |
### Distillation Training and Evaluation
For all training runs, we use 15 epochs with 600 images randomly sampled from ImageNet per epoch, which takes about 75 minutes on a single A6000 GPU. Throughout training, we freeze the encoder and the project_out layer of the decoder.
There are two versions of the pipeline, with and without GAN loss, but their main ideas are similar. For the distillation part, there are three choices of alignment point: before the project_out layer, after TritonRMSNorm2d, or after the ReLU activation. Based on an ablation study, we align the features after the ReLU activation using an L1 loss.
For the image generation loss, we combine the L1, LPIPS, and PatchGAN losses with different weights.
For training techniques, we experimented with the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, dynamic loss weight adjustment, batch accumulation, and segment training, but only some of them proved effective. In the end, we adopted the AdamW optimizer, the CosineAnnealingWarmRestarts scheduler, and dynamic loss weight adjustment.
During parameter tuning, we found that the distillation loss matters far more than the image generation loss: even when the image evaluation metrics reach satisfactory values, the pictures are still not good enough if the distillation loss remains high. Therefore, we give the distillation loss a much higher weight during the first 10 epochs and let it gradually decrease to half of its original value along a cosine schedule; the idea is to ensure the accuracy of distillation first. During the last 5 epochs, we focus more on the image losses and gradually increase their weights to double their original values, again along a cosine schedule.
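For concreteness, here is a minimal sketch of this schedule, assuming a half-period cosine in each phase; the function name and exact curve are illustrative assumptions, while the actual implementation lives in dc_de_distillation_gan.py.
``` python
import math

def dynamic_loss_weights(epoch, num_epochs=15, division_epoch=10,
                         alpha_disti=100.0, alpha_img=1.0, beta=0.1, gamma=0.3):
    # Illustrative sketch only (not the actual training-script code).
    if epoch < division_epoch:
        # Phase 1: distillation weight decays from alpha_disti to 0.5 * alpha_disti.
        progress = epoch / division_epoch
        w_disti = alpha_disti * (0.75 + 0.25 * math.cos(math.pi * progress))
        w_img, w_lpips, w_gan = alpha_img, beta, gamma
    else:
        # Phase 2: image-loss weights grow from 1x to 2x their original values.
        progress = (epoch - division_epoch) / (num_epochs - division_epoch)
        scale = 1.5 - 0.5 * math.cos(math.pi * progress)  # 1.0 -> 2.0
        w_disti = 0.5 * alpha_disti
        w_img, w_lpips, w_gan = alpha_img * scale, beta * scale, gamma * scale
    return w_disti, w_img, w_lpips, w_gan
```
With the default weights above, the distillation weight moves from 100 toward 50 over the first 10 epochs while the image weights stay fixed, and then the image weights grow toward double over the last 5 epochs.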
For example, to train a student model based on the w4-v3 pruned model, Loss = 100 * L1_Distillation + 1 * L1_Image + 0.1 * LPIPS_Image + 0.3 * PatchGAN_Image. The training ratio between the generator (student model) and the discriminator is 300:1, i.e., we train the student model on 300 samples and then train the GAN discriminator once, so that the discriminator does not learn too fast and leave the student without a useful signal for producing realistic images (see the sketch after the command below). The training command is as follows (the complete training commands for the three best models are in train_distillation.sh):
``` bash
# Training Code for dc-ae-f32c32-in-1.0-w4-v25
# Expected FID: 2.22879, PSNR: 26.22011, SSIM: 0.72431, LPIPS: 0.12579
# Based on model w4-v3, decoder.depth_list=[0,5,10,2,2,2] -> [0,1,2,1,1,2],
# Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs
CUDA_VISIBLE_DEVICES=7 python /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/scripts/dc_de_distillation_gan.py \
--batch_size 4 --learning_rate_G 1e-4 --learning_rate_D 1e-4 --num_epochs 15 --train_samples 600 \
--student_model_path /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v3 \
--model_save_dir /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v25 \
--pic_save_dir /data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/pic_results_w4_v25 \
--alpha_disti 100 --alpha_img 1 --beta 0.1 --gamma 0.3 --gan_ratio 300 --align 3 --freeze_proj_out True --freeze_encoder True \
--cosine_T_0_G 5 --cosine_T_mult_G 1 --eta_min_G 1e-6 --weight_decay_G 0.01 \
--cosine_T_0_D 5 --cosine_T_mult_D 1 --eta_min_D 1e-6 --weight_decay_D 0.01 \
--dynamic_loss True --division_epoch 10 \
--accumulate_batch False --accumulation_steps 4 \
--shallow_train False --shallow_training_epochs 5 --model_config dc-ae-f32c32-in-1.0-pruned-w4-v3
```
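To make the 300:1 schedule concrete, here is a hedged toy sketch of the alternation: the student updates on every batch, while the discriminator updates only once per gan_ratio samples. The modules and losses below are stand-ins, not the actual training-script API.
``` python
import torch
import torch.nn as nn

# Toy stand-ins for the student decoder and the PatchGAN discriminator.
student = nn.Conv2d(3, 3, 3, padding=1)
disc = nn.Conv2d(3, 1, 4, stride=2, padding=1)
opt_G = torch.optim.AdamW(student.parameters(), lr=1e-4, weight_decay=0.01)
opt_D = torch.optim.AdamW(disc.parameters(), lr=1e-4, weight_decay=0.01)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

gan_ratio, samples_since_d_step = 300, 0
for step in range(600):                       # stands in for the dataloader loop
    images = torch.rand(4, 3, 64, 64)         # stands in for a real image batch
    recon = student(images)

    # Generator (student) step on every batch: image loss + adversarial term.
    fake_logits = disc(recon)
    loss_G = l1(recon, images) + 0.3 * bce(fake_logits, torch.ones_like(fake_logits))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Discriminator step only once per `gan_ratio` samples, so it cannot
    # learn too fast and leave the student without a useful signal.
    samples_since_d_step += images.size(0)
    if samples_since_d_step >= gan_ratio:
        samples_since_d_step = 0
        real_logits, fake_logits = disc(images), disc(recon.detach())
        loss_D = bce(real_logits, torch.ones_like(real_logits)) \
               + bce(fake_logits, torch.zeros_like(fake_logits))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
```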
You can measure the evaluation metrics (FID, PSNR, SSIM, LPIPS) of a model with the command below; be sure to substitute the "model" argument with your target model path.
``` bash
CUDA_VISIBLE_DEVICES=7 torchrun --nnodes=1 --nproc_per_node=1 --master_port 29505 -m applications.dc_ae.eval_dc_ae_model dataset=imagenet_512 model=/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-w4-v25 run_dir=tmp
```
During training, the generated pictures (ground truth, teacher model, student model) at the end of each epoch are stored in "pic_save_dir", and the loss information is saved to "model_save_dir/training_losses.txt", so we can better monitor the training process and analyze problems.
The batch accumulation method uses the "accumulation_steps" parameter to update the model parameters only after a certain number of batches, virtually increasing the batch size.
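A minimal, self-contained sketch of the assumed semantics, with a toy model standing in for the student: gradients from several small batches are summed before a single optimizer step, emulating a batch size of batch_size * accumulation_steps.
``` python
import torch

model = torch.nn.Linear(8, 1)                         # toy stand-in for the student
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
accumulation_steps = 4

opt.zero_grad()
for i in range(16):                                   # stands in for the batch loop
    x, y = torch.randn(4, 8), torch.randn(4, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accumulation_steps).backward()            # gradients sum across small batches
    if (i + 1) % accumulation_steps == 0:             # one optimizer update per 4 batches
        opt.step()
        opt.zero_grad()
```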
In the segment training method, we intended to train shallow layers/blocks first and deeper ones later to reduce the training cost and improve efficiency. Unfortunately, these attempts did not work.
For the exact meaning, type, default value, etc. of each parameter, please refer to the code or the Appendix. If you are interested in a more detailed account of the training and debugging process, you may also refer to [this Feishu Docs](https://sjtu.feishu.cn/docx/TaexdtRxfoLwsoxbrQQcS9nynRe).
## Demo of DC_DE
- Demo of training results
| Model | Description | Result | Epoch 15 |
| :---------------------------------------------------------------------------------: | :-----------------------------------------: | :-------------------: | :---------------: |
| [dc-ae-f32c32-in-1.0-w4-v4](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) | **Based on dc-ae-f32c32-in-1.0-w3-v2**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.05 * PatchGAN, GAN training ratio 300:1, Compression Ratio 10%, 1.5% reduction in total MACs | Ground Truth |  |
| | | Teacher Model|  |
| | | Ours|  |
| [dc-ae-f32c32-in-1.0-w4-v8](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0) | **Based on dc-ae-f32c32-in-1.0-w4-v1**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.1 * PatchGAN, GAN training ratio 300:1, Compression Ratio 8%, 22% reduction in total MACs and 40% reduction in decoder MACs | Ground Truth |  |
| | | Teacher Model |  |
| | | Ours |  |
| [dc-ae-f32c32-in-1.0-w4-v25](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0) | **Based on dc-ae-f32c32-in-1.0-w4-v3**, distillation training with GAN, Loss = 100 * L1_Dis + 1 * L1 + 0.1 * LPIPS + 0.3 * PatchGAN, GAN training ratio 300:1, Dynamic Loss Training, Compression Ratio 12%, 40% reduction in total MACs, 65% reduction in decoder MACs | Ground Truth |  |
| | |Teacher Model|  |
| | | Ours |  |
- Demo of a Sample Picture (Girl) via Different Models
| Ground Truth | Teacher Model | dc-ae-f32c32-in-1.0-w4-v4 | dc-ae-f32c32-in-1.0-w4-v8 | dc-ae-f32c32-in-1.0-w4-v25 |
| :-------------------: | :-------------------: | :-------------------: | :-------------------: | :-------------------: |
|  |  |  |  |  |
- Evaluation Metrics of Models
| Model Name | FID(↓) | PSNR(↑) | SSIM(↑) | LPIPS(↓) |
| :---------------: | :-------------------: | :-------------------: | :---------------: | :---------------: |
| dc-ae-f32c32-in-1.0(benchmark) | 0.2047 | 26.2547 | 0.7136| 0.0783 |
| dc-ae-f32c32-in-1.0-w4-v4 | 0.83769 | 26.5646 | 0.73160| 0.09752 |
| dc-ae-f32c32-in-1.0-w4-v8 | 1.69890 | 26.55881 | 0.73866| 0.11312 |
| dc-ae-f32c32-in-1.0-w4-v25 | 2.22879 | 26.22011 | 0.72431| 0.12579 |
## Appendix
The parameters and their corresponding descriptions are as follows:
``` python
parser.add_argument("--teacher_model_path", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pretrained_models/dc-ae-f32c32-in-1.0", required=False, help="Path to the teacher model.")
parser.add_argument("--student_model_path", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models/dc-ae-f32c32-in-1.0-v1", required=False, help="Path to the student model.")
parser.add_argument("--model_config", type=str, default="dc-ae-f32c32-in-1.0-pruned-w4-v3", required=True, help="Config name of the model.")
parser.add_argument("--dataset_path", type=str, default="/home/jyzhang/dataset/imagenet/train", required=False, help="Path to the dataset (e.g., ImageNet).")
parser.add_argument("--batch_size", type=int, default=16, help="Batch size for training.")
parser.add_argument("--learning_rate_G", type=float, default=1e-4, help="Learning rate for training the Generator (student model).")
parser.add_argument("--learning_rate_D", type=float, default=1e-4, help="Learning rate for training the Discriminator.")
parser.add_argument("--alpha_disti", type=float, default=1.0, help="Weight for the L1 distillation loss.")
parser.add_argument("--alpha_img", type=float, default=0.8, help="Weight for the L1 image loss.")
parser.add_argument("--beta", type=float, default=0.1, help="Weight for the LPIPS loss.")
parser.add_argument("--gamma", type=float, default=0.05, help="Weight for the PatchGAN loss.")
parser.add_argument("--num_epochs", type=int, default=10, help="Number of epochs for training.")
parser.add_argument("--shallow_train", type=bool, default=False, required=False, help="Whether to train shallow layers first and all layers later.")
parser.add_argument("--shallow_training_epochs", type=int, default=5, help="Number of epochs for shallow-layer training.")
parser.add_argument("--gan_ratio", type=int, default=10000, help="Number of generator training samples per discriminator update.")
parser.add_argument("--align", type=int, default=0, required=False, help="Feature to align: 0 for the final feature after project_out, 1 for the feature before project_out, 2 for the feature after Norm, 3 for the feature after ReLU.")
parser.add_argument("--train_samples", type=int, default=1281167, help="Number of image samples for training. 1281167 is the total number of samples in ImageNet.")
parser.add_argument("--pic_save_dir", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/reconstruction_results", required=False, help="Directory to save sampled images.")
parser.add_argument("--model_save_dir", type=str, default="/data2/user/jyzhang/MIT/efficientvit/applications/dc_ae/pruned_models", required=False, help="Directory to save the distilled model.")
parser.add_argument("--freeze_proj_out", type=bool, default=True, required=False, help="Whether to freeze the proj_out layer during training. It should be frozen for distillation training.")
parser.add_argument("--freeze_encoder", type=bool, default=True, required=False, help="Whether to freeze the encoder during training. It should be frozen for distillation training.")
parser.add_argument("--weight_decay_G", type=float, default=0.01, help="Weight decay for the Generator's AdamW optimizer.")
parser.add_argument("--weight_decay_D", type=float, default=0.01, help="Weight decay for the Discriminator's AdamW optimizer.")
parser.add_argument("--cosine_T_0_G", type=int, default=10, help="Number of iterations for the first restart for the Generator.")
parser.add_argument("--cosine_T_0_D", type=int, default=10, help="Number of iterations for the first restart for the Discriminator.")
parser.add_argument("--cosine_T_mult_G", type=int, default=1, help="Factor by which T_i increases after each restart for the Generator.")
parser.add_argument("--cosine_T_mult_D", type=int, default=1, help="Factor by which T_i increases after each restart for the Discriminator.")
parser.add_argument("--eta_min_G", type=float, default=1e-6, help="Minimum learning rate for the Generator.")
parser.add_argument("--eta_min_D", type=float, default=1e-6, help="Minimum learning rate for the Discriminator.")
parser.add_argument("--dynamic_loss", type=bool, default=False, required=False, help="Whether to use the dynamic loss adaptation strategy.")
parser.add_argument("--division_epoch", type=int, default=10, required=False, help="Before the division epoch, focus more on the distillation loss; after it, focus more on the image losses.")
parser.add_argument("--accumulate_batch", type=bool, default=False, required=False, help="Whether to use the batch accumulation training strategy.")
parser.add_argument("--accumulation_steps", type=int, default=4, required=False, help="Number of batches over which to accumulate gradients before each optimizer update.")
```
|
Triangle104/QwQ-32B-ArliAI-RpR-v3-Q4_K_S-GGUF | Triangle104 | 2025-04-28T14:39:08Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T14:20:19Z | ---
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v3
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/coilCTGeL0OUYr9PA9zna.jpeg
---
# Triangle104/QwQ-32B-ArliAI-RpR-v3-Q4_K_S-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v3`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3) for more details on the model.
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
---
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style, unlike other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative writing reasoning datasets contain only one response per example. This type of single-response dataset causes degraded output quality in long multi-turn chats when used for training reasoning models, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to create the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was possible by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax dataset conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks in the same way as it encounters them during inference, which is to say, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual template-free segments dataset, so that the model is never trained to see the reasoning block in the context, just like how the model will be used at inference time.
The result of training QwQ on this dataset with this method is consistently coherent and interesting outputs, even in long multi-turn RP chats. This is, as far as we know, the first true correctly-trained reasoning model for RP and creative writing.
You can access the model at https://arliai.com and we also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q4_k_s.gguf -c 2048
```
|
Asgar1993/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slimy_cow | Asgar1993 | 2025-04-28T14:37:52Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am regal slimy cow",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T09:42:15Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slimy_cow
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am regal slimy cow
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slimy_cow
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Asgar1993/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slimy_cow", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
idolstranger/deepfake_audio_detection | idolstranger | 2025-04-28T14:35:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-04-28T13:57:05Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: deepfake_audio_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepfake_audio_detection
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0065
- eval_accuracy: 0.9988
- eval_runtime: 58.7898
- eval_samples_per_second: 85.049
- eval_steps_per_second: 2.671
- epoch: 2.0
- step: 626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF | mradermacher | 2025-04-28T14:34:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:petkopetkov/Qwen2.5-0.5B-song-lyrics-generation",
"base_model:quantized:petkopetkov/Qwen2.5-0.5B-song-lyrics-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T14:30:13Z | ---
base_model: petkopetkov/Qwen2.5-0.5B-song-lyrics-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
model_name: qwen2.5-0.5B-spotify-ft-no-lora
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/petkopetkov/Qwen2.5-0.5B-song-lyrics-generation
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-song-lyrics-generation-GGUF/resolve/main/Qwen2.5-0.5B-song-lyrics-generation.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Slapzsylv/PhoneHolder | Slapzsylv | 2025-04-28T14:31:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T14:31:12Z | ---
license: apache-2.0
---
|
ImSota/LLama_LoRA | ImSota | 2025-04-28T14:24:12Z | 0 | 0 | null | [
"safetensors",
"mistral",
"unsloth",
"trl",
"sft",
"license:mit",
"region:us"
] | null | 2025-04-28T08:10:10Z | ---
license: mit
tags:
- unsloth
- trl
- sft
---
|
zaindgr8/zain1 | zaindgr8 | 2025-04-28T14:21:34Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T13:57:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zain1
---
# Zain1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zain1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "zain1",
"lora_weights": "https://huggingface.co/zaindgr8/zain1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zaindgr8/zain1', weight_name='lora.safetensors')
image = pipeline('zain1').images[0]
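# Save the result; "zain1.png" is a placeholder output path.
image.save("zain1.png")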
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/zaindgr8/zain1/discussions) to add images that show off what you’ve made with this LoRA.
|
thejaminator/low-medical-2e-05-0-4000insec-1000-chat-medical-llama | thejaminator | 2025-04-28T14:20:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T14:20:08Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/low-medical-2e-05-0-4000insec-2000-chat-medical-llama | thejaminator | 2025-04-28T14:19:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T14:19:09Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SalomonMetre13/nllb-fra-shr-mt-v3 | SalomonMetre13 | 2025-04-28T14:15:59Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:SalomonMetre13/nllb-fra-shr-mt-v3",
"base_model:finetune:SalomonMetre13/nllb-fra-shr-mt-v3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-27T23:15:23Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: SalomonMetre13/nllb-fra-shr-mt-v3
tags:
- generated_from_trainer
model-index:
- name: nllb-fra-shr-mt-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-fra-shr-mt-v3
This model is a fine-tuned version of [SalomonMetre13/nllb-fra-shr-mt-v3](https://huggingface.co/SalomonMetre13/nllb-fra-shr-mt-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7368
## Model description
More information needed
## Intended uses & limitations
More information needed
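In the meantime, a minimal translation sketch (the `shr` language code below is a hypothetical placeholder; inspect the tokenizer's added language codes for the value actually used in fine-tuning):

```python
from transformers import pipeline

# "shr_Latn" is a hypothetical code; "fra_Latn" is the standard NLLB code for French.
translator = pipeline(
    "translation",
    model="SalomonMetre13/nllb-fra-shr-mt-v3",
    src_lang="fra_Latn",
    tgt_lang="shr_Latn",
)
print(translator("Bonjour tout le monde")[0]["translation_text"])
```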
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7439 | 0.8354 | 2000 | 0.7368 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Shahradmz/Qwen2-0.5B-Instruct_continual_data_debug_PPO_0 | Shahradmz | 2025-04-28T14:14:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dataset:Continual_PPO_continual_data_debug_0",
"arxiv:1909.08593",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T22:17:13Z | ---
datasets: Continual_PPO_continual_data_debug_0
library_name: transformers
model_name: Qwen2-0.5B-Instruct_continual_data_debug_PPO_0
tags:
- generated_from_trainer
licence: license
---
# Model Card for Qwen2-0.5B-Instruct_continual_data_debug_PPO_0
This model was fine-tuned on the [Continual_PPO_continual_data_debug_0](https://huggingface.co/datasets/Continual_PPO_continual_data_debug_0) dataset (the base model was not recorded in the training metadata).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Shahradmz/Qwen2-0.5B-Instruct_continual_data_debug_PPO_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shahrad_m/AIFGen-ppo-continual-test/runs/ufysmsjb)
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
apal99/ppo-LunarLander-v2-stepped | apal99 | 2025-04-28T14:13:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T14:05:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 224.47 +/- 9.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

repo_id = "apal99/ppo-LunarLander-v2-stepped"  # The repo_id
filename = "ppo-LunarLander-v2-stepped.zip"  # The model filename.zip

# Overrides commonly needed when loading checkpoints saved with older gym/SB3 versions.
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}

checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
```
|
nessianursin/nessianursing9 | nessianursin | 2025-04-28T14:11:53Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-28T14:11:49Z | ---
license: bsd-3-clause
---
|
zhangchen1991/nq-search-r1-grpo-qwen2.5-7b-it-em-actor-step600 | zhangchen1991 | 2025-04-28T14:11:28Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T14:00:37Z | ---
license: apache-2.0
---
|
zaindgr8/zain | zaindgr8 | 2025-04-28T14:06:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T13:42:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zain
---
# Zain
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zain` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "zain",
"lora_weights": "https://huggingface.co/zaindgr8/zain/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zaindgr8/zain', weight_name='lora.safetensors')
image = pipeline('zain').images[0]
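# Save the result; "zain.png" is a placeholder output path.
image.save("zain.png")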
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/zaindgr8/zain/discussions) to add images that show off what you’ve made with this LoRA.
|
shirleythresher/shirleythresher | shirleythresher | 2025-04-28T14:03:09Z | 0 | 0 | null | [
"license:bsd-3-clause-clear",
"region:us"
] | null | 2025-04-28T14:03:09Z | ---
license: bsd-3-clause-clear
---
|
nm-testing/gemma-3-27b-it-FP8-dynamic | nm-testing | 2025-04-28T13:59:06Z | 0 | 0 | null | [
"safetensors",
"gemma3",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"compressed-tensors",
"region:us"
] | null | 2025-04-28T13:56:47Z | ---
base_model:
- google/gemma-3-27b-it
--- |
samoline/7e6a1eb4-5885-4b47-b8b8-6d5b89199623 | samoline | 2025-04-28T13:56:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T13:56:10Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: 7e6a1eb4-5885-4b47-b8b8-6d5b89199623
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 7e6a1eb4-5885-4b47-b8b8-6d5b89199623
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/7e6a1eb4-5885-4b47-b8b8-6d5b89199623", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/samoline-nan/Gradients-On-Demand/runs/sn2vrs2l)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
altkachenko11/gpt2-finetuned | altkachenko11 | 2025-04-28T13:56:30Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T16:38:20Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1998
## Model description
More information needed
## Intended uses & limitations
More information needed
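Pending further documentation, a minimal generation sketch (the prompt and generation length are placeholders):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="altkachenko11/gpt2-finetuned")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```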
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cpu
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Hastagaras/run-22-8b-test-Q5_K_M-GGUF | Hastagaras | 2025-04-28T13:56:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Hastagaras/run-22-8b-test",
"base_model:quantized:Hastagaras/run-22-8b-test",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T13:55:59Z | ---
base_model: Hastagaras/run-22-8b-test
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Hastagaras/run-22-8b-test-Q5_K_M-GGUF
This model was converted to GGUF format from [`Hastagaras/run-22-8b-test`](https://huggingface.co/Hastagaras/run-22-8b-test) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Hastagaras/run-22-8b-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hastagaras/run-22-8b-test-Q5_K_M-GGUF --hf-file run-22-8b-test-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hastagaras/run-22-8b-test-Q5_K_M-GGUF --hf-file run-22-8b-test-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Hastagaras/run-22-8b-test-Q5_K_M-GGUF --hf-file run-22-8b-test-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Hastagaras/run-22-8b-test-Q5_K_M-GGUF --hf-file run-22-8b-test-q5_k_m.gguf -c 2048
```
|
AdnaneIsMe/oas_lora_model_v8 | AdnaneIsMe | 2025-04-28T13:54:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T13:54:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaestrAI/elara-lora-1745847739 | MaestrAI | 2025-04-28T13:54:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T13:42:18Z | # Elara LoRA Model
This is a LoRA model for the character Elara
Created at 2025-04-28 15:42:25
|
TKites/finetuned-model-bert-base-uncased | TKites | 2025-04-28T13:52:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T13:52:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
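No starter code was provided; the following is a minimal sketch for this text-classification checkpoint (the input sentence is a placeholder, and the label names depend on the fine-tuning data):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="TKites/finetuned-model-bert-base-uncased")
print(classifier("This is a sample sentence."))
```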
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
selmamimi/flux-model | selmamimi | 2025-04-28T13:51:21Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T13:36:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Flux Model
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/selmamimi/flux-model/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('selmamimi/flux-model', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
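# Save the result; "flux-output.png" is a placeholder output path.
image.save("flux-output.png")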
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/selmamimi/flux-model/discussions) to add images that show off what you’ve made with this LoRA.
|
anrbk/pdf-word | anrbk | 2025-04-28T13:50:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T13:50:27Z | ---
license: apache-2.0
---
|
danyush/qwen2.5_vl_3B_virat_lr_r4 | danyush | 2025-04-28T13:49:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-28T13:42:10Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
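No starter code was provided; the sketch below follows the standard Qwen2.5-VL usage pattern in recent transformers releases (>= 4.49) and assumes this repository contains the full fine-tuned weights rather than a bare adapter; the image path and prompt are placeholders:

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "danyush/qwen2.5_vl_3B_virat_lr_r4"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open("frame.jpg")  # placeholder input frame
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the activity in this frame."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```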
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felladrin/gguf-Q5_K_M-Qwen2.5-0.5B-Instruct | Felladrin | 2025-04-28T13:49:11Z | 2 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-09-21T14:52:00Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Felladrin/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Felladrin/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Felladrin/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Felladrin/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Felladrin/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m-imat.gguf -c 2048
```
|
Ridge1999/Stephan_v2_caption | Ridge1999 | 2025-04-28T13:48:10Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T13:17:59Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Stephan
---
# Stephan_V2_Caption
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Stephan` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Stephan",
"lora_weights": "https://huggingface.co/Ridge1999/Stephan_v2_caption/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ridge1999/Stephan_v2_caption', weight_name='lora.safetensors')
image = pipeline('Stephan').images[0]
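# Save the result; "stephan.png" is a placeholder output path.
image.save("stephan.png")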
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Ridge1999/Stephan_v2_caption/discussions) to add images that show off what you’ve made with this LoRA.
|
ReadyArt/Broken-Tutu-24B-Q4_K_M-GGUF | ReadyArt | 2025-04-28T13:47:58Z | 182 | 1 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"en",
"base_model:ReadyArt/Broken-Tutu-24B",
"base_model:quantized:ReadyArt/Broken-Tutu-24B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-27T04:54:16Z | ---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Broken-Tutu-24B
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
strong {
color: #FF1493 !important;
}
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffd6e7 0%, #ffc0cb 100%);
color: #ff0077 !important;
text-shadow: 0 0 3px rgba(255, 192, 203, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #ffe6ee 0%, #ffd1dc 100%);
color: #d4005e !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(255, 220, 235, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(255, 105, 180, 0.1);
border: 1px solid rgba(255, 20, 147, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 127, 0.3);
border-color: rgba(255, 0, 127, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(255, 20, 147, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
.model-name {
color: #ff1493;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 127, 0.5); }
100% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
}
.subtitle {
color: #ff69b4;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255, 105, 180, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 127, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(255, 20, 147, 0.2);
transition: transform 0.5s ease;
}
.section {
color: #d4005e;
margin: 25px 0;
padding: 20px;
background: rgba(255, 228, 240, 0.9);
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 127, 0.3);
box-shadow: 0 0 15px rgba(255, 20, 147, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
.section-title {
color: #ff1493;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(255, 228, 240, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
animation: cardScan 4s linear infinite;
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(255, 20, 147, 0.2);
border-color: rgba(255, 0, 127, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #d4005e !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(255, 20, 147, 0.1);
color: #d4005e !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(255, 20, 147, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button:hover {
background: rgba(255, 20, 147, 0.2);
border-color: rgba(255, 20, 147, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(255, 20, 147, 0.2);
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #C71585;
border-left: 3px solid #C71585;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(255, 20, 147, 0.1);
border: 1px solid #ff1493;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(255, 240, 245, 0.95);
border-color: rgba(200, 0, 100, 0.3);
}
.model-name, .section-title, .subtitle {
color: #d4005e;
text-shadow: 0 0 5px rgba(255, 0, 127, 0.3);
}
.section {
background: rgba(255, 240, 245, 0.9);
border-color: rgba(200, 0, 100, 0.2);
color: #8b005d;
}
.section p,
.section ul li,
.section > p > strong {
color: #d4005e !important;
}
.link-card {
background: rgba(255, 228, 240, 0.95);
border-color: rgba(200, 0, 100, 0.2);
}
.link-card h3 {
color: #8b005d !important;
}
.link-button {
background: rgba(200, 0, 100, 0.1);
color: #8b005d !important;
border-color: rgba(200, 0, 100, 0.3);
}
.link-button:hover {
background: rgba(200, 0, 100, 0.2);
border-color: rgba(200, 0, 100, 0.5);
}
.disclaimer {
color: #d4005e;
border-color: #d4005e;
}
.badge {
border-color: #d4005e;
background: rgba(200, 0, 100, 0.1);
}
}
/* Code block styling */
.merge-config {
background: rgba(255, 220, 235, 0.95);
border-radius: 8px;
padding: 20px;
box-shadow: 0 0 15px rgba(255, 105, 180, 0.1);
border: 1px solid rgba(255, 20, 147, 0.2);
position: relative;
overflow: hidden;
font-family: 'Courier New', Courier, monospace;
color: #d4005e;
line-height: 1.5;
}
.merge-config::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.5);
border-radius: 8px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
.merge-line {
margin: 5px 0;
}
.merge-key {
color: #ff1493;
font-weight: bold;
}
.merge-value {
color: #d4005e;
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Broken-Tutu-24B</h1>
</div>
<div class="waifu-container">
<img src="https://i.imgur.com/4wpTnnv.png" class="waifu-img" alt="Broken Tutu Waifu">
</div>
<div class="section">
<h2 class="section-title">🧠 Intelligent Fusion</h2>
<p>This model combines five powerful models with precision:</p>
<ul>
<li>⚡ <strong>ReadyArt/The-Omega-Directive-M-24B-v1.1</strong> - Core intelligence (20% weight)</li>
<li>🎭 <strong>ReadyArt/Omega-Darker_The-Final-Directive-24B</strong> - Narrative depth (20% weight)</li>
<li>💡 <strong>ReadyArt/Forgotten-Safeword-24B</strong> - Creative flexibility (20% weight)</li>
<li>🔥 <strong>TroyDoesAI/BlackSheep-24B</strong> - Dark brilliance (20% weight)</li>
<li>🧩 <strong>TheDrummer/Cydonia-24B-v2</strong> - Structural coherence (20% weight)</li>
</ul>
<div class="merge-config">
<div class="merge-line"><span class="merge-key">merge_method:</span> <span class="merge-value">dare_ties</span></div>
<div class="merge-line"><span class="merge-key">base_model:</span> <span class="merge-value">ReadyArt/The-Omega-Directive-M-24B-v1.1</span></div>
<div class="merge-line"><span class="merge-key">models:</span></div>
<div class="merge-line"><span class="merge-key"> - model:</span> <span class="merge-value">ReadyArt/The-Omega-Directive-M-24B-v1.1</span></div>
<div class="merge-line"><span class="merge-key"> parameters:</span></div>
<div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.2</span></div>
<div class="merge-line"><span class="merge-key"> - model:</span> <span class="merge-value">ReadyArt/Omega-Darker_The-Final-Directive-24B</span></div>
<div class="merge-line"><span class="merge-key"> parameters:</span></div>
<div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.2</span></div>
<div class="merge-line"><span class="merge-key"> - model:</span> <span class="merge-value">ReadyArt/Forgotten-Safeword-24B</span></div>
<div class="merge-line"><span class="merge-key"> parameters:</span></div>
<div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.2</span></div>
<div class="merge-line"><span class="merge-key"> - model:</span> <span class="merge-value">TroyDoesAI/BlackSheep-24B</span></div>
<div class="merge-line"><span class="merge-key"> parameters:</span></div>
<div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.2</span></div>
<div class="merge-line"><span class="merge-key"> - model:</span> <span class="merge-value">TheDrummer/Cydonia-24B-v2</span></div>
<div class="merge-line"><span class="merge-key"> parameters:</span></div>
<div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.2</span></div>
<div class="merge-line"><span class="merge-key">parameters:</span></div>
<div class="merge-line"><span class="merge-key"> density:</span> <span class="merge-value">0.3</span></div>
<div class="merge-line"><span class="merge-key">tokenizer:</span></div>
<div class="merge-line"><span class="merge-key"> source:</span> <span class="merge-value">union</span></div>
<div class="merge-line"><span class="merge-key">chat_template:</span> <span class="merge-value">auto</span></div>
</div>
</div>
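For anyone who wants to reproduce the merge, here is the recipe above transcribed as plain mergekit YAML (a direct transcription of the styled block; the file name `config.yaml` is just a suggestion):

```yaml
merge_method: dare_ties
base_model: ReadyArt/The-Omega-Directive-M-24B-v1.1
models:
  - model: ReadyArt/The-Omega-Directive-M-24B-v1.1
    parameters:
      weight: 0.2
  - model: ReadyArt/Omega-Darker_The-Final-Directive-24B
    parameters:
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-24B
    parameters:
      weight: 0.2
  - model: TroyDoesAI/BlackSheep-24B
    parameters:
      weight: 0.2
  - model: TheDrummer/Cydonia-24B-v2
    parameters:
      weight: 0.2
parameters:
  density: 0.3
tokenizer:
  source: union
chat_template: auto
```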
<div class="section">
<h2 class="section-title">Performance</h2>
<ul>
<li><strong>Coherent Multi-Character Tracking:</strong> Maintains distinct personalities, speech patterns, and relationships across multiple characters in extended roleplays</li>
<li><strong>Intelligent Responses:</strong> Provides thoughtful, context-aware replies that demonstrate understanding of complex scenarios</li>
<li><strong>Diverse Outputs:</strong> Generates varied responses to identical prompts, avoiding repetition</li>
<li><strong>NSFW Capabilities:</strong> Handles explicit content with detailed descriptions when appropriate</li>
<li><strong>Narrative Consistency:</strong> Maintains plot continuity across long sessions</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T5-XML" class="link-button">Mistral-V7-Tekken-T5-XML with Control Protocol</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-i1-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/ReadyArt/broken-tutu-24b-exl2-680f866194813593aeaf5637" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model maintains The-Omega-Directive's complete lack of alignment:</p>
<ul>
<li>🚫 No moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario, no matter how depraved</li>
<li>💀 May generate content that requires industrial-grade brain bleach</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">📜 Performance Notes</h2>
<ul>
<li>🔥 Maintains signature intensity with improved narrative flow during explicit scenes</li>
<li>📖 Handles multi-character orgies with improved consistency</li>
<li>🧠 Excels at long-form smut without losing track of plot threads</li>
<li>⚡ Noticeably better at following complex kink instructions than previous versions</li>
<li>🎭 Responds to subtle prompt nuances like a mind reader with a porn addiction</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>TheDrummer (Cydonia Model Architect)</li>
<li>TroyDoesAI (BlackSheep Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>sleepdeprived3 (Omega / Safeword)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull's Kofi</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
|
Madi7a/llama2-7B-Fine-tunedByMAD | Madi7a | 2025-04-28T13:46:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T13:16:34Z | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
model_name: llama2-7B-Fine-tunedByMAD
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama2-7B-Fine-tunedByMAD
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Madi7a/llama2-7B-Fine-tunedByMAD", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
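Since only TRL and SFT are documented, here is a minimal sketch of what the training setup can look like with the framework versions listed below. The dataset, hyperparameters, and output directory are assumptions for illustration, not taken from this repo:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data is not documented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",           # base model from the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama2-7B-Fine-tunedByMAD"),
)
trainer.train()
```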
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.1
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cm92u8ajv0000yy2pmswoj77u_cma13wc0p009i12tvp1k3l1di | BootesVoid | 2025-04-28T13:44:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T13:44:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: GRMGRL
---
# Cm92U8Ajv0000Yy2Pmswoj77U_Cma13Wc0P009I12Tvp1K3L1Di
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `GRMGRL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "GRMGRL",
"lora_weights": "https://huggingface.co/BootesVoid/cm92u8ajv0000yy2pmswoj77u_cma13wc0p009i12tvp1k3l1di/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm92u8ajv0000yy2pmswoj77u_cma13wc0p009i12tvp1k3l1di', weight_name='lora.safetensors')
image = pipeline('GRMGRL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm92u8ajv0000yy2pmswoj77u_cma13wc0p009i12tvp1k3l1di/discussions) to add images that show off what you’ve made with this LoRA.
|
daishen/openfin-3B-ZH-optimal-sft_lxl3129_audit_regulation | daishen | 2025-04-28T13:43:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T10:58:32Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
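Given the `qwen2` / `text-generation` / `conversational` tags, a minimal inference sketch (the prompt is a hypothetical example matching the repo's audit-regulation naming; generation settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daishen/openfin-3B-ZH-optimal-sft_lxl3129_audit_regulation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical prompt: "Briefly explain the meaning of auditor independence."
messages = [{"role": "user", "content": "简要说明审计独立性的含义。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```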
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ramu143/finetune_lora_flan | Ramu143 | 2025-04-28T13:40:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:adapter:google/flan-t5-small",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:26:46Z | ---
library_name: peft
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetune_lora_flan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_lora_flan
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 32.7080
- Accuracy: 1.0
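No usage snippet is provided, so here is a minimal inference sketch assuming the repo contains a PEFT LoRA adapter for the base model (the prompt is a hypothetical example; the actual task format is not documented):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
model = PeftModel.from_pretrained(base, "Ramu143/finetune_lora_flan")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

# Hypothetical prompt; replace with the task the adapter was trained on.
inputs = tokenizer("Translate to German: Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```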
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 31.359 | 1.0 | 1500 | 32.7080 | 1.0 |
| 31.3129 | 2.0 | 3000 | 32.7080 | 1.0 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
KhaledLakhdher/finetuned | KhaledLakhdher | 2025-04-28T13:40:15Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-04-21T11:47:40Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4653
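The card does not document usage, so here is a minimal inference sketch. Assumptions: the fine-tune keeps SpeechT5's standard interface, and the zero speaker embedding is a crude placeholder (a real 512-dim x-vector will sound much better):

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("KhaledLakhdher/finetuned")
model = SpeechT5ForTextToSpeech.from_pretrained("KhaledLakhdher/finetuned")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from the fine-tuned model.", return_tensors="pt")
# SpeechT5 expects a 512-dim x-vector speaker embedding; zeros are a placeholder.
speaker_embeddings = torch.zeros((1, 512))
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```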
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.7128 | 8.3137 | 100 | 0.5822 |
| 0.5591 | 16.6275 | 200 | 0.4858 |
| 0.5278 | 24.9412 | 300 | 0.4698 |
| 0.5222 | 33.3137 | 400 | 0.4682 |
| 0.5103 | 41.6275 | 500 | 0.4653 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
DKSM11/Dekisasmita | DKSM11 | 2025-04-28T13:40:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T13:40:12Z | ---
license: apache-2.0
---
|
talha23527/DeepSeek-R1-Distill-Llama-8B | talha23527 | 2025-04-28T13:39:39Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"text-classification",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-classification | 2025-03-25T06:59:14Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---
# Uploaded model
- **Developed by:** talha23527
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
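This repo ships GGUF files for llama.cpp-style runtimes; for the Unsloth side, a minimal loading sketch of the 4-bit base it was fine-tuned from (assumed setup for illustration, not taken from this repo):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```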
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Triangle104/QwQ-32B-ArliAI-RpR-v3-Q3_K_S-GGUF | Triangle104 | 2025-04-28T13:37:36Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T13:28:48Z | ---
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v3
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/coilCTGeL0OUYr9PA9zna.jpeg
---
# Triangle104/QwQ-32B-ArliAI-RpR-v3-Q3_K_S-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v3`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative writing
dataset used for RPMax, with a focus on variety to ensure high
creativity and minimize cross-context repetition. Users familiar with
RPMax will recognize the unique, non-repetitive writing style unlike
other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative writing reasoning datasets contain only one response per example. This type of single-response dataset causes degraded output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to create the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference: it never sees the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual template-free segments dataset, ensuring the model is never trained to see a reasoning block in its context, just as it will be used at inference time.
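For readers unfamiliar with that setup, here is a sketch of what a record in axolotl's template-free segments format can look like (key names follow axolotl's template-free documentation; the text is invented, not from the RpR dataset). Each segment's `label` flag controls whether it contributes to the training loss, so the reasoning block only ever appears in the trained-on completion, never in the masked context:

```json
{"segments": [
  {"label": false, "text": "<|im_start|>user\nHello there.<|im_end|>\n<|im_start|>assistant\n"},
  {"label": true,  "text": "<think>\n...reasoning about the reply...\n</think>\n\nHi! *waves*<|im_end|>"}
]}
```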
The result of training QwQ on this dataset with this method is consistently coherent and interesting outputs even in long multi-turn RP chats. As far as we know, this is the first true correctly-trained reasoning model trained for RP and creative writing.
You can access the model at https://arliai.com and we also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q3_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q3_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q3_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v3-Q3_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v3-q3_k_s.gguf -c 2048
```
|
mlfoundations-dev/c1_math_0d_1s_3k | mlfoundations-dev | 2025-04-28T13:36:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T14:44:38Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_math_0d_1s_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_math_0d_1s_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_math_0d_1s_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Sofia-gb/fashionSigLIP-roturas14 | Sofia-gb | 2025-04-28T13:36:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-04-28T13:35:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
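Since the repo is tagged `feature-extraction` with `custom_code`, a minimal loading sketch (assumes the repo's remote code defines the model class; `trust_remote_code=True` is required for custom-code repos):

```python
from transformers import AutoModel

# custom_code repos need trust_remote_code=True so the repo's own modeling code runs
model = AutoModel.from_pretrained("Sofia-gb/fashionSigLIP-roturas14", trust_remote_code=True)
```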
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danyush/qwen2.5_vl_3b_virat_shuffled_r4 | danyush | 2025-04-28T13:35:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-28T13:30:43Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
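Given the `qwen2_5_vl` / `image-text-to-text` tags, a minimal inference sketch (assumptions: the checkpoint loads with the standard Qwen2.5-VL classes; the image path and prompt are placeholders):

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "danyush/qwen2.5_vl_3b_virat_shuffled_r4"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open("frame.jpg")  # hypothetical input frame
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the activity in this scene."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```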
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tamewild/3b_v5_adapter | tamewild | 2025-04-28T13:29:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T13:27:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EunjiChoi/train_250428 | EunjiChoi | 2025-04-28T13:29:37Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T13:28:12Z | ---
license: apache-2.0
---
|
Ederson13/donut-cord-v2-menu-sample-demo | Ederson13 | 2025-04-28T13:28:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-28T11:15:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
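Given the Donut (`vision-encoder-decoder`, image-text-to-text) setup and the CORD-v2 naming, a minimal inference sketch (assumptions: the fine-tune keeps the standard Donut interface, and `<s_cord-v2>` is the task prompt inherited from the base model; the image path is a placeholder):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Ederson13/donut-cord-v2-menu-sample-demo"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")  # hypothetical input document
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # assumed CORD-v2 task token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```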
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ail-sa/rahul_test | ail-sa | 2025-04-28T13:26:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T12:54:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Rahul_Test
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/rahul_test/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/rahul_test', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/rahul_test/discussions) to add images that show off what you’ve made with this LoRA.
|
riva7/Hjj | riva7 | 2025-04-28T13:24:30Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-28T13:24:30Z | ---
license: other
license_name: hh
license_link: LICENSE
---
|
prashantbhoutika1989/flux-dev-lora | prashantbhoutika1989 | 2025-04-28T13:23:33Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T12:49:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Prashant
---
# Flux Dev Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Prashant` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Prashant",
"lora_weights": "https://huggingface.co/prashantbhoutika1989/flux-dev-lora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('prashantbhoutika1989/flux-dev-lora', weight_name='lora.safetensors')
image = pipeline('Prashant').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 60
## Contribute your own examples
You can use the [community tab](https://huggingface.co/prashantbhoutika1989/flux-dev-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
John6666/pvc-style-model-movable-figure-model-pony-piano-mix-v10-sdxl | John6666 | 2025-04-28T13:21:08Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"PVC figure",
"PVC",
"figure",
"3D",
"style",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-04-28T13:14:51Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- PVC figure
- PVC
- figure
- 3D
- style
- pony
---
Original model is [here](https://civitai.com/models/1520074/pvc-style-modelmovable-figure-model-pony-pianomix?modelVersionId=1719779).
This model created by [wagalipagirl](https://civitai.com/user/wagalipagirl).
|
Triangle104/Athena-3.5-7B-Q8_0-GGUF | Triangle104 | 2025-04-28T13:20:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:Spestly/Athena-3.5-7B",
"base_model:quantized:Spestly/Athena-3.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T13:18:20Z | ---
base_model: Spestly/Athena-3.5-7B
library_name: transformers
tags:
- unsloth
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Triangle104/Athena-3.5-7B-Q8_0-GGUF
This model was converted to GGUF format from [`Spestly/Athena-3.5-7B`](https://huggingface.co/Spestly/Athena-3.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-3.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Athena-3.5-7B-Q8_0-GGUF --hf-file athena-3.5-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Athena-3.5-7B-Q8_0-GGUF --hf-file athena-3.5-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Athena-3.5-7B-Q8_0-GGUF --hf-file athena-3.5-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Athena-3.5-7B-Q8_0-GGUF --hf-file athena-3.5-7b-q8_0.gguf -c 2048
```
|
Lambent/qwen2.5-14B-alternate-instruct-slerp | Lambent | 2025-04-28T13:19:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Lambent/alternate-instruct-qwen2.5-14B",
"base_model:merge:Lambent/alternate-instruct-qwen2.5-14B",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:merge:Qwen/Qwen2.5-14B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-23T17:11:49Z | ---
base_model:
- Qwen/Qwen2.5-14B-Instruct
- Lambent/alternate-instruct-qwen2.5-14B
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# qwenselfinstructalt
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
Same idea as Lambent/qwen2.5-14B-selfmerge-A, but with the base model first trained on a ~20M token instruct and continued-pretraining dataset.
The hope is that the lightweight instruction tuning might add some synergy with the original instruct model.
Testing: eq-bench showed no syntax errors, and the result was 75.6984, closer to the original instruct value of 76.9195 than selfmerge-A (73.8068).
Subsets of mrfakename/Capybara-ShareGPT, abacusai/SystemChat-1.1, anthracite-org/nopm_claude_writing_fixed and fineweb-edu were used for the alternate training.
### Merge Method
This model was merged using the SLERP merge method.
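SLERP interpolates along the arc between two weight tensors rather than the straight line between them, which better preserves their geometry. A toy illustration of the idea (illustrative only, not mergekit's exact implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos((a_n @ b_n).clamp(-1.0, 1.0))  # angle between the two tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly colinear: fall back to plain linear interpolation
        return ((1 - t) * a_flat + t * b_flat).view_as(a)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.view_as(a)
```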
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
* [Lambent/alternate-instruct-qwen2.5-14B](https://huggingface.co/Lambent/alternate-instruct-qwen2.5-14B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Lambent/alternate-instruct-qwen2.5-14B
merge_method: slerp
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
t:
- value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```
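
To reproduce the merge, the config can be run with mergekit's CLI (a sketch; assumes mergekit is installed and the YAML above is saved as `config.yaml`):

```bash
pip install mergekit
mergekit-yaml config.yaml ./qwen2.5-14B-alternate-instruct-slerp --cuda
```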
|
Triangle104/Athena-3.5-7B-Q6_K-GGUF | Triangle104 | 2025-04-28T13:17:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:Spestly/Athena-3.5-7B",
"base_model:quantized:Spestly/Athena-3.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T13:16:43Z | ---
base_model: Spestly/Athena-3.5-7B
library_name: transformers
tags:
- unsloth
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Triangle104/Athena-3.5-7B-Q6_K-GGUF
This model was converted to GGUF format from [`Spestly/Athena-3.5-7B`](https://huggingface.co/Spestly/Athena-3.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-3.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Athena-3.5-7B-Q6_K-GGUF --hf-file athena-3.5-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Athena-3.5-7B-Q6_K-GGUF --hf-file athena-3.5-7b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Athena-3.5-7B-Q6_K-GGUF --hf-file athena-3.5-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Athena-3.5-7B-Q6_K-GGUF --hf-file athena-3.5-7b-q6_k.gguf -c 2048
```
|
ApacheOne/WAN_loRAs | ApacheOne | 2025-04-28T12:27:01Z | 0 | 0 | null | [
"safetensors",
"custom",
"art",
"region:us"
] | null | 2025-04-28T11:22:28Z | ---
tags:
- art
---
## LoRAs
+ A collection of LoRAs for SOTA WAN2.1 checkpoint models
+ All LoRAs have markdown files with info from the author of the model.
### Ex.
#### All have
- Trigger words
- Authors nicknames
- Base model type
- Description
- Version
- Links to more info
#### Some have
- T2V (text-to-video) | T2I (text-to-image)
- I2V (image-to-video) | I2I (image-to-image)
- V2V (video-to-video)
## Community information
All models listed are shared to continue the great open-source push of generative AI
## Thanks
- Authors and Brains behind these models and info
- Hosting and Sharing platforms
# TODOs
- Filter out non-WAN2.1 models
- Categorize by LoRA type
Amfavoured247/powo | Amfavoured247 | 2025-04-28T12:25:07Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-28T12:25:02Z | ---
license: other
license_name: praiseotah
license_link: LICENSE
---
|