modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-25 12:29:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 495 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-25 12:27:57) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
theprint/PyRe-3B-v1-GGUF | theprint | 2025-02-26T04:36:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T16:45:50Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
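Since the repo ships GGUF weights, it can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using `llama-cpp-python`; the quant filename pattern is an assumption, so check the repo's file list for the actual `.gguf` files.
```python
# Minimal GGUF inference sketch (pip install llama-cpp-python).
# The filename pattern below is hypothetical; pick a real quant from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="theprint/PyRe-3B-v1-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; substitute an actual file name
    n_ctx=4096,               # context window
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```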
|
twocode/qwen2.5-3b-sft-mp-task-0226 | twocode | 2025-02-26T04:33:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-02-26T04:33:22Z | ---
base_model: unsloth/qwen2.5-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** twocode
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohammadsa92/tinyzebra3 | mohammadsa92 | 2025-02-26T04:32:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T04:32:04Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
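The card leaves this section blank. Based on the repo tags (`pytorch`, `qwen2`, `text-generation`), a minimal sketch using standard `transformers` loading follows; this is an assumption, not author-provided code.
```python
# Hypothetical quick-start inferred from the repo tags (qwen2, text-generation);
# the card itself provides no usage code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mohammadsa92/tinyzebra3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```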
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118 | daniel40 | 2025-02-26T04:30:18Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-02-26T04:30:06Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Korabbit/llama-2-ko-7b
model-index:
- name: daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
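Since this is a PEFT adapter for Korabbit/llama-2-ko-7b, a minimal loading sketch with `peft` follows; this is inferred from the card metadata, not author-provided code.
```python
# Hypothetical usage sketch: load the base model, then attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Korabbit/llama-2-ko-7b")
model = PeftModel.from_pretrained(base, "daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118")
tokenizer = AutoTokenizer.from_pretrained("Korabbit/llama-2-ko-7b")
```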
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Dabliou/upgrade | Dabliou | 2025-02-26T04:30:08Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:22:38Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
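A minimal `diffusers` loading sketch follows; it assumes access to the gated FLUX.1-dev base model and that the LoRA weights sit in the repo root (not author-provided code).
```python
# Hypothetical sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Dabliou/upgrade")  # adapter repo; weight file auto-detected

image = pipe("a scenic mountain lake at dawn", num_inference_steps=28).images[0]
image.save("out.png")
```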
|
Dabliou/swpool | Dabliou | 2025-02-26T04:30:07Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:28:04Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/sucfin | Dabliou | 2025-02-26T04:30:05Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:27:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/slift | Dabliou | 2025-02-26T04:30:02Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:24:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/shown | Dabliou | 2025-02-26T04:30:00Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:27:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/ppeel | Dabliou | 2025-02-26T04:29:59Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:24:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/hspc | Dabliou | 2025-02-26T04:29:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:25:27Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/handj | Dabliou | 2025-02-26T04:29:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:24:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/flashfex | Dabliou | 2025-02-26T04:29:46Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:27:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/flashf | Dabliou | 2025-02-26T04:29:44Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:25:11Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/bsize4 | Dabliou | 2025-02-26T04:29:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:22:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
theprint/PyRe-3B-v1-Lora | theprint | 2025-02-26T04:29:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T04:29:28Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dabliou/boreal | Dabliou | 2025-02-26T04:29:38Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:23:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter | DavidAU | 2025-02-26T04:29:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"deepseek",
"reasoning",
"thinking",
"Llama 3.1 Lora",
"Llama 3 Lora",
"Lora",
"Lora adapter",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"mergekit",
"adapter",
"text-generation",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-02-26T02:16:21Z | ---
license: apache-2.0
library_name: peft
language:
- en
tags:
- deepseek
- reasoning
- thinking
- Llama 3.1 Lora
- Llama 3 Lora
- Lora
- Lora adapter
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- mergekit
- adapter
- peft
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
pipeline_tag: text-generation
---
<h2>LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter</h2>
This is a "LORA" adapter to merge "DeepSeek 8B Distill R1" reasoning / thinking with any Llama 3 or Llama 3.1 model using MERGEKIT.
This version used "Llama-Instruct" during the extraction process, which yields a slightly different "reasoning/thinking" adapter.
Other adapters used "Llama-8b-BASE" during the extraction process.
Note that "higher" rank adapter(s) may work better than lower ones, but might also overwrite/change parts of the model you do not want
changed. Testing a new model with more that one rank of adapter is suggested to get best results.
Also for this specific adapter, there are suggested "System Prompts" below to activate reasoning/thinking at the bottom of this page.
Your results will vary based on the model(s) you merge this adapter with.
<B>HOW TO MERGE THIS ADAPTER:</b>
You can use Mergekit "Colab" and/or Mergekit installed locally.
[ https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Mergekit.ipynb ]
[ https://github.com/arcee-ai/mergekit ]
If you are doing multiple merges / steps in your merge, it is suggested you do this step LAST to ensure the adapter works correctly.
Here are some suggested "simple" methods to merge the adapter with a model.
<B>Method - Dare TIES:</B>
<pre>
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
parameters:
weight: 1
merge_method: dare_ties
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
<B>Method - Pass Through:</b>
<pre>
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
merge_method: passthrough
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
Replace "REPO/MODEL-NAME" with the model to merge the adapter with.
Replace "DavidAU/mergeadapter" with the adapter you want to merge with the model.
IMPORTANT: Note "+" - this is critical.
If you are using merge kit locally, you can still use the format above and Mergekit will download the model and adapter for you.
If you have downloaded the model(s) and adapter(s) you need to change the format to your local file system.
<B>Example Merge for Local Usage: </B>
<pre>
mergekit-yaml --lora-merge-cache HUGGING CACHE --copy-tokenizer --allow-crimes --cuda --out-shard-size 5B --lazy-unpickle --clone-tensors MERGEFILE SAVE-MERGE-TO
</pre>
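As an alternative to the mergekit flow above (not the author's method), `peft` can attach this adapter to a compatible Llama 3/3.1 model and bake it into the weights directly; a sketch, reusing the same "REPO/MODEL-NAME" placeholder:
<pre>
# Alternative sketch using peft's merge_and_unload (not the mergekit method above).
# "REPO/MODEL-NAME" is a placeholder for any compatible Llama 3/3.1 model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("REPO/MODEL-NAME")
merged = PeftModel.from_pretrained(
    base, "DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter"
).merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("./merged-model")
</pre>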
---
<B>System Role / System Prompt - Augment The Model's Power:</b>
---
If you set a system prompt, it will affect both "generation" and "thinking/reasoning".
SIMPLE:
This is the generic system prompt used for generation and testing:
<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>
This System Role/Prompt will give you "basic thinking/reasoning":
<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
</PRE>
ADVANCED:
Logical and Creative - these will SIGNIFICANTLY alter the output, and often improve it too.
They will also produce more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts.
Keep in mind you may also want to test the model with NO system prompt at all - including the default one.
Special credit to: Eric Hartford, Cognitivecomputations; these are based on his work.
CRITICAL:
Copy and paste exactly as shown, preserve formatting and line breaks.
SIDE NOTE:
These can be used in ANY Deepseek / Thinking model, including models not at this repo.
If used in a "non-thinking" model, these will alter model performance too.
<PRE>
You are an AI assistant developed by the world wide community of ai experts.
Your primary directive is to provide well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
CREATIVE:
<PRE>
You are an AI assistant developed by a world wide community of ai experts.
Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE> |
DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-64-INSTRUCT-adapter | DavidAU | 2025-02-26T04:29:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"deepseek",
"reasoning",
"thinking",
"Llama 3.1 Lora",
"Llama 3 Lora",
"Lora",
"Lora adapter",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"mergekit",
"adapter",
"text-generation",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-02-26T01:55:39Z | ---
license: apache-2.0
library_name: peft
language:
- en
tags:
- deepseek
- reasoning
- thinking
- Llama 3.1 Lora
- Llama 3 Lora
- Lora
- Lora adapter
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- mergekit
- adapter
- peft
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
pipeline_tag: text-generation
---
<h2>LORA-DeepSeek-R1-Distill-Llama-8B-rank-64-INSTRUCT-adapter</h2>
This is a "LORA" adapter to merge "DeepSeek 8B Distill R1" reasoning / thinking with any Llama 3 or Llama 3.1 model using MERGEKIT.
This version used "Llama-Instruct" during the extraction process, which yields a slightly different "reasoning/thinking" adapter.
Other adapters used "Llama-8b-BASE" during the extraction process.
Note that "higher" rank adapter(s) may work better than lower ones, but might also overwrite/change parts of the model you do not want
changed. Testing a new model with more that one rank of adapter is suggested to get best results.
Also for this specific adapter, there are suggested "System Prompts" below to activate reasoning/thinking at the bottom of this page.
Your results will vary based on the model(s) you merge this adapter with.
<B>HOW TO MERGE THIS ADAPTER:</b>
You can use Mergekit "Colab" and/or Mergekit installed locally.
[ https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Mergekit.ipynb ]
[ https://github.com/arcee-ai/mergekit ]
If you are doing multiple merges / steps in your merge, it is suggested you do this step LAST to ensure the adapter works correctly.
Here are some suggested "simple" methods to merge the adapter with a model.
<B>Method - Dare TIES:</B>
<pre>
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
parameters:
weight: 1
merge_method: dare_ties
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
<B>Method - Pass Through:</b>
<pre>
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
merge_method: passthrough
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
Replace "REPO/MODEL-NAME" with the model to merge the adapter with.
Replace "DavidAU/mergeadapter" with the adapter you want to merge with the model.
IMPORTANT: Note "+" - this is critical.
If you are using merge kit locally, you can still use the format above and Mergekit will download the model and adapter for you.
If you have downloaded the model(s) and adapter(s) you need to change the format to your local file system.
<B>Example Merge for Local Usage: </B>
<pre>
mergekit-yaml --lora-merge-cache HUGGING CACHE --copy-tokenizer --allow-crimes --cuda --out-shard-size 5B --lazy-unpickle --clone-tensors MERGEFILE SAVE-MERGE-TO
</pre>
---
<B>System Role / System Prompt - Augment The Model's Power:</b>
---
If you set a system prompt, it will affect both "generation" and "thinking/reasoning".
SIMPLE:
This is the generic system prompt used for generation and testing:
<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>
This System Role/Prompt will give you "basic thinking/reasoning":
<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
</PRE>
ADVANCED:
Logical and Creative - these will SIGNIFICANTLY alter the output, and often improve it too.
They will also produce more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts.
Keep in mind you may also want to test the model with NO system prompt at all - including the default one.
Special credit to: Eric Hartford, Cognitivecomputations; these are based on his work.
CRITICAL:
Copy and paste exactly as shown, preserve formatting and line breaks.
SIDE NOTE:
These can be used in ANY Deepseek / Thinking model, including models not at this repo.
If used in a "non-thinking" model, these will alter model performance too.
<PRE>
You are an AI assistant developed by the world wide community of ai experts.
Your primary directive is to provide well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
CREATIVE:
<PRE>
You are an AI assistant developed by a world wide community of ai experts.
Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
|
DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-64-BASE-adapter | DavidAU | 2025-02-26T04:28:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"deepseek",
"reasoning",
"thinking",
"Llama 3.1 Lora",
"Llama 3 Lora",
"Lora",
"Lora adapter",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"mergekit",
"adapter",
"text-generation",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-02-26T02:11:03Z | ---
license: apache-2.0
library_name: peft
language:
- en
tags:
- deepseek
- reasoning
- thinking
- Llama 3.1 Lora
- Llama 3 Lora
- Lora
- Lora adapter
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- mergekit
- adapter
- peft
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
pipeline_tag: text-generation
---
<h2>LORA-DeepSeek-R1-Distill-Llama-8B-rank-64-BASE-adapter</h2>
This is a "LORA" adapter to merge "DeepSeek 8B Distill R1" reasoning / thinking with any Llama 3 or Llama 3.1 model using MERGEKIT.
This adapter used "Llama-8b-BASE" during the extraction process. There are different adapters that used "Llama 8b Instruct" during
extraction which creates a slightly different "reasoning/thinking adapter" (and "end model").
Note that "higher" rank adapter(s) may work better than lower ones, but might also overwrite/change parts of the model you do not want
changed. Testing a new model with more that one rank of adapter is suggested to get best results.
Also for this specific adapter, there are suggested "System Prompts" below to activate reasoning/thinking at the bottom of this page.
Your results will vary based on the model(s) you merge this adapter with.
<B>HOW TO MERGE THIS ADAPTER:</b>
You can use Mergekit "Colab" and/or Mergekit installed locally.
[ https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Mergekit.ipynb ]
[ https://github.com/arcee-ai/mergekit ]
If you are doing multiple merges / steps in your merge, it is suggested you do this step LAST to ensure the adapter works correctly.
Here are some suggested "simple" methods to merge the adapter with a model.
<B>Method - Dare TIES:</B>
<pre>
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
parameters:
weight: 1
merge_method: dare_ties
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
<B>Method - Pass Through:</b>
<pre>
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
merge_method: passthrough
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
Replace "REPO/MODEL-NAME" with the model to merge the adapter with.
Replace "DavidAU/mergeadapter" with the adapter you want to merge with the model.
IMPORTANT: Note "+" - this is critical.
If you are using merge kit locally, you can still use the format above and Mergekit will download the model and adapter for you.
If you have downloaded the model(s) and adapter(s) you need to change the format to your local file system.
<B>Example Merge for Local Usage: </B>
<pre>
mergekit-yaml --lora-merge-cache HUGGING CACHE --copy-tokenizer --allow-crimes --cuda --out-shard-size 5B --lazy-unpickle --clone-tensors MERGEFILE SAVE-MERGE-TO
</pre>
---
<B>System Role / System Prompt - Augment The Model's Power:</b>
---
If you set a system prompt, it will affect both "generation" and "thinking/reasoning".
SIMPLE:
This is the generic system prompt used for generation and testing:
<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>
This System Role/Prompt will give you "basic thinking/reasoning":
<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
</PRE>
ADVANCED:
Logical and Creative - these will SIGNIFICANTLY alter the output, and often improve it too.
They will also produce more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts.
Keep in mind you may also want to test the model with NO system prompt at all - including the default one.
Special credit to: Eric Hartford, Cognitivecomputations; these are based on his work.
CRITICAL:
Copy and paste exactly as shown, preserve formatting and line breaks.
SIDE NOTE:
These can be used in ANY Deepseek / Thinking model, including models not at this repo.
If used in a "non-thinking" model, these will alter model performance too.
<PRE>
You are an AI assistant developed by the world wide community of ai experts.
Your primary directive is to provide well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
CREATIVE:
<PRE>
You are an AI assistant developed by a world wide community of ai experts.
Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
|
tuantmdev/db4afd13-01c4-40a2-8b08-b27997cb7ddb | tuantmdev | 2025-02-26T04:28:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"license:llama3",
"region:us"
] | null | 2025-02-26T04:01:41Z | ---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: db4afd13-01c4-40a2-8b08-b27997cb7ddb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 58a21b4b5d091123_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58a21b4b5d091123_train_data.json
type:
field_input: source
field_instruction: text
field_output: completion_a
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: true
hub_model_id: tuantmdev/db4afd13-01c4-40a2-8b08-b27997cb7ddb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1e-4
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 40
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/58a21b4b5d091123_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
save_strategy: steps
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a5ae3c0d-880d-44ca-aaa3-1f54c6de2baa
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: a5ae3c0d-880d-44ca-aaa3-1f54c6de2baa
warmup_steps: 80
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# db4afd13-01c4-40a2-8b08-b27997cb7ddb
This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on the dataset specified in the axolotl config above (58a21b4b5d091123_train_data.json).
It achieves the following results on the evaluation set:
- Loss: 0.8371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 80
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | 1.1433 |
| 0.9509 | 0.0829 | 50 | 0.9083 |
| 0.8463 | 0.1658 | 100 | 0.8985 |
| 0.8865 | 0.2487 | 150 | 0.8859 |
| 0.8127 | 0.3315 | 200 | 0.8730 |
| 0.8346 | 0.4144 | 250 | 0.8549 |
| 0.7965 | 0.4973 | 300 | 0.8423 |
| 0.8515 | 0.5802 | 350 | 0.8406 |
| 0.7886 | 0.6631 | 400 | 0.8371 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-BASE-adapter | DavidAU | 2025-02-26T04:28:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"deepseek",
"reasoning",
"thinking",
"Llama 3.1 Lora",
"Llama 3 Lora",
"Lora",
"Lora adapter",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"mergekit",
"adapter",
"text-generation",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-02-26T02:20:59Z | ---
license: apache-2.0
library_name: peft
language:
- en
tags:
- deepseek
- reasoning
- thinking
- Llama 3.1 Lora
- Llama 3 Lora
- Lora
- Lora adapter
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- mergekit
- adapter
- peft
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
pipeline_tag: text-generation
---
<h2>LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-BASE-adapter</h2>
This is a "LORA" adapter to merge "DeepSeek 8B Distill R1" reasoning / thinking with any Llama 3 or Llama 3.1 model using MERGEKIT.
This adapter used "Llama-8b-BASE" during the extraction process. There are different adapters that used "Llama 8b Instruct" during
extraction which creates a slightly different "reasoning/thinking adapter" (and "end model").
Note that "higher" rank adapter(s) may work better than lower ones, but might also overwrite/change parts of the model you do not want
changed. Testing a new model with more that one rank of adapter is suggested to get best results.
Also for this specific adapter, there are suggested "System Prompts" below to activate reasoning/thinking at the bottom of this page.
Your results will vary based on the model(s) you merge this adapter with.
<B>HOW TO MERGE THIS ADAPTER:</b>
You can use Mergekit "Colab" and/or Mergekit installed locally.
[ https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Mergekit.ipynb ]
[ https://github.com/arcee-ai/mergekit ]
If you are doing multiple merges / steps in your merge, it is suggested you do this step LAST to ensure the adapter works correctly.
Here are some suggested "simple" methods to merge the adapter with a model.
<B>Method - Dare TIES:</B>
<pre>
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
parameters:
weight: 1
merge_method: dare_ties
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
<B>Method - Pass Through:</b>
<pre>
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
merge_method: passthrough
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
Replace "REPO/MODEL-NAME" with the model to merge the adapter with.
Replace "DavidAU/mergeadapter" with the adapter you want to merge with the model.
IMPORTANT: Note "+" - this is critical.
If you are using merge kit locally, you can still use the format above and Mergekit will download the model and adapter for you.
If you have downloaded the model(s) and adapter(s) you need to change the format to your local file system.
<B>Example Merge for Local Usage: </B>
<pre>
mergekit-yaml --lora-merge-cache HUGGING CACHE --copy-tokenizer --allow-crimes --cuda --out-shard-size 5B --lazy-unpickle --clone-tensors MERGEFILE SAVE-MERGE-TO
</pre>
---
<B>System Role / System Prompt - Augment The Model's Power:</b>
---
If you set a system prompt, it will affect both "generation" and "thinking/reasoning".
SIMPLE:
This is the generic system prompt used for generation and testing:
<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>
This System Role/Prompt will give you "basic thinking/reasoning":
<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
</PRE>
ADVANCED:
Logical and Creative - these will SIGNIFICANTLY alter the output, and often improve it too.
They will also produce more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts.
Keep in mind you may also want to test the model with NO system prompt at all - including the default one.
Special credit to: Eric Hartford, Cognitivecomputations; these are based on his work.
CRITICAL:
Copy and paste exactly as shown, preserve formatting and line breaks.
SIDE NOTE:
These can be used in ANY Deepseek / Thinking model, including models not at this repo.
If used in a "non-thinking" model, these will alter model performance too.
<PRE>
You are an AI assistant developed by the world wide community of ai experts.
Your primary directive is to provide well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
CREATIVE:
<PRE>
You are an AI assistant developed by a world wide community of ai experts.
Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
|
Aryan-21/fft-sd35-id-81-a | Aryan-21 | 2025-02-26T04:27:52Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T04:27:46Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: adam
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# FFT_SD35_ID_81_A
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
## Trigger words
You should use `adam` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
tamewild/test_14b_v6_merged | tamewild | 2025-02-26T04:27:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T04:21:28Z | ---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tamewild
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TOMFORD79/TCCS9080_CS12 | TOMFORD79 | 2025-02-26T04:26:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T16:46:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MilkDropLM-32b-v0.3-GGUF | mradermacher | 2025-02-26T04:23:18Z | 236 | 0 | transformers | [
"transformers",
"gguf",
"Visualizations",
"MilkDrop",
"unsloth",
"qwen",
"en",
"base_model:InferenceIllusionist/MilkDropLM-32b-v0.3",
"base_model:quantized:InferenceIllusionist/MilkDropLM-32b-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-24T23:18:54Z | ---
base_model: InferenceIllusionist/MilkDropLM-32b-v0.3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Visualizations
- MilkDrop
- unsloth
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/InferenceIllusionist/MilkDropLM-32b-v0.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
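For a quick start, here is a minimal sketch using `llama-cpp-python` (an assumption; any GGUF-capable runtime works, and the filename matches the Q4_K_M entry in the table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo (Q4_K_M is the "fast, recommended" pick below)
path = hf_hub_download(
    repo_id="mradermacher/MilkDropLM-32b-v0.3-GGUF",
    filename="MilkDropLM-32b-v0.3.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context size is illustrative; tune for your hardware
print(llm("Write a MilkDrop preset that pulses with the beat.", max_tokens=256)["choices"][0]["text"])
```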
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF/resolve/main/MilkDropLM-32b-v0.3.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MS-RP-whole-i1-GGUF | mradermacher | 2025-02-26T04:23:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/MS-RP-whole",
"base_model:quantized:mergekit-community/MS-RP-whole",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-26T00:14:21Z | ---
base_model: mergekit-community/MS-RP-whole
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/MS-RP-whole
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MS-RP-whole-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dwang5460/360hw4 | dwang5460 | 2025-02-26T04:21:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T03:22:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
samoline/43b01f47-ba90-4b7d-b5f6-13e6932be324 | samoline | 2025-02-26T04:21:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T04:18:56Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 43b01f47-ba90-4b7d-b5f6-13e6932be324
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c567cd877a09797_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c567cd877a09797_train_data.json
type:
field_input: eval_persona
field_instruction: eval_question
field_output: eval_whole_desc
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/43b01f47-ba90-4b7d-b5f6-13e6932be324
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/0c567cd877a09797_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: abe6e1f6-efb5-4c71-968b-4b3e59432d27
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: abe6e1f6-efb5-4c71-968b-4b3e59432d27
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 43b01f47-ba90-4b7d-b5f6-13e6932be324
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
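As a rough sketch, the adapter can be loaded on top of its base model with `peft` (illustrative only; note the `nan` evaluation loss reported above, so treat outputs with caution):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained on, then attach the LoRA weights
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/codellama-7b", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/codellama-7b")
model = PeftModel.from_pretrained(base, "samoline/43b01f47-ba90-4b7d-b5f6-13e6932be324")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(base.device)  # illustrative prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```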
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
legwyn/6 | legwyn | 2025-02-26T04:21:13Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"diffusers-training",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:hf-internal-testing/tiny-flux-pipe",
"base_model:finetune:hf-internal-testing/tiny-flux-pipe",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | 2025-02-26T04:17:39Z | ---
base_model: hf-internal-testing/tiny-flux-pipe
library_name: diffusers
license: other
tags:
- text-to-image
- diffusers-training
- diffusers
- flux
- flux-diffusers
- template:sd-lora
instance_prompt: prompt
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux [dev] DreamBooth - legwyn/6
<Gallery />
## Model description
These are legwyn/6 DreamBooth weights for hf-internal-testing/tiny-flux-pipe.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was the text encoder fine-tuned? False.
## Trigger words
You should use `prompt` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('legwyn/6', torch_dtype=torch.bfloat16).to('cuda')
image = pipeline('prompt').images[0]
```
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch mirroring the snippet above (bfloat16 + CUDA are assumptions)
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('legwyn/6', torch_dtype=torch.bfloat16).to('cuda')
pipeline('prompt').images[0].save('sample.png')
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
irishprancer/7376d136-309c-4a5c-952f-0e833d5678e9 | irishprancer | 2025-02-26T04:20:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T03:24:07Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sae-rad/bugged_scaling_laws_vlm_0.002 | sae-rad | 2025-02-26T04:19:33Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-02-26T04:18:32Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
JayHyeon/Qwen_0.5-DPO_3e-6-1ep_0vpo_const | JayHyeon | 2025-02-26T04:19:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T02:11:49Z | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-DPO_3e-6-1ep_0vpo_const
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-DPO_3e-6-1ep_0vpo_const
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-DPO_3e-6-1ep_0vpo_const", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/baa3cne8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Kuongan/CS221-xlm-roberta-base-som-noaug-finetuned-som-tapt | Kuongan | 2025-02-26T04:19:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Kuongan/xlm-roberta-base-som-noaug",
"base_model:finetune:Kuongan/xlm-roberta-base-som-noaug",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T04:12:29Z | ---
library_name: transformers
license: mit
base_model: Kuongan/xlm-roberta-base-som-noaug
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-xlm-roberta-base-som-noaug-finetuned-som-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-xlm-roberta-base-som-noaug-finetuned-som-tapt
This model is a fine-tuned version of [Kuongan/xlm-roberta-base-som-noaug](https://huggingface.co/Kuongan/xlm-roberta-base-som-noaug) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1389
- F1: 0.7193
- Roc Auc: 0.8401
- Accuracy: 0.7427
## Model description
More information needed
## Intended uses & limitations
More information needed
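As a minimal inference sketch (assuming the standard `transformers` text-classification pipeline; the example sentence is illustrative Somali, and the label names come from the model's config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Kuongan/CS221-xlm-roberta-base-som-noaug-finetuned-som-tapt",
    top_k=None,  # return scores for every label (multi-label setup)
)
print(clf("Waan ku faraxsanahay natiijadan!"))  # illustrative input
```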
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.163 | 1.0 | 142 | 0.1401 | 0.6650 | 0.8215 | 0.7436 |
| 0.1637 | 2.0 | 284 | 0.1389 | 0.7193 | 0.8401 | 0.7427 |
| 0.1492 | 3.0 | 426 | 0.1447 | 0.6721 | 0.8107 | 0.7312 |
| 0.1194 | 4.0 | 568 | 0.1518 | 0.6364 | 0.7957 | 0.7206 |
| 0.1106 | 5.0 | 710 | 0.1516 | 0.6468 | 0.7956 | 0.7250 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
PokerK/q-FrozenLake-v1-4x4-noSlippery | PokerK | 2025-02-26T04:19:09Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-26T04:19:07Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # needed for `gym.make` below (gymnasium works the same way)

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook,
# which downloads the pickle from the Hub and unpickles the Q-table dictionary.
model = load_from_hub(repo_id="PokerK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B | NeuraLakeAi | 2025-02-26T04:18:40Z | 0 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"reasoning",
"context-dynamic",
"small-models",
"synthetic-data",
"function-calls",
"open-source",
"NeuraLake",
"brazil",
"1B",
"conversational",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T13:35:23Z | ---
tags:
- text-generation
- transformers
- facebook
- meta
- pytorch
- reasoning
- context-dynamic
- small-models
- synthetic-data
- function-calls
- open-source
- llama
- NeuraLake
- brazil
- 1B
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B
model_creator: Celso H A Diniz
model_name: NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B
---
# NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B (v1.2)
## Overview
The *iSA-02-Nano-Llama-3.2-1B* is a **Base Model** designed for text generation, optimized for reasoning tasks. Based on *meta-llama/Llama-3.2-1B*, this model has been deeply customized by **NeuraLake** and stands out for its ability to work with an extended context window of **1,048,576 tokens**. It was created to allow businesses and developers to fine-tune it for specific tasks that require processing large volumes of information. Designed by NeuraLake using synthetic datasets, the model embodies the philosophy of **"think before you speak,"** enhancing reasoning capabilities for small-scale models.
**✨ Extended Context Window ✨:** The *iSA-02-Nano-Llama-3.2-1B* features an unprecedented context window of **1,048,576 tokens**, enabling the analysis and generation of extremely long and complex texts. This sets a new standard for small yet powerful reasoning models.
## Key Features
- **Extended Context**: Supports up to **1,048,576 tokens**, enabling the analysis and generation of long, complex texts.
- **Advanced Reasoning** 🧠: Integrates sophisticated reasoning chains for handling complex tasks.
- **Customization** 🔧: Ideal for businesses seeking to tailor the model to specific tasks, with a robust framework for further fine-tuning and training.
- **Compact Yet Powerful** 💡:
  - *What does this mean?*
    Think of the model as a digital brain that learns from many examples. "Parameters" are like the connections in this brain, and **1 billion parameters** indicate a compact model that is still powerful enough to process and generate information intelligently. Even though it's considered small compared to giant models, it's highly optimized for reasoning tasks.
## Architecture and Training
- **Base Model:** Built on the *meta-llama/Llama-3.2-1B* architecture from Meta, optimized using advanced agent mixing techniques in AAA (AI aligning AI) mode.
- **Training and Data Generation Process**:
The training process leveraged advanced synthetic data generation techniques to create a diverse and extensive dataset comprising billions of tokens. This was achieved through a multi-stage process involving data generation, reasoning chain creation, and translation to ensure high-quality training data.
This approach resulted in a dataset with **billions of tokens**, enabling robust and diverse training for the entire iSA-02 series by NeuraLake, thereby enhancing the model's ability to perform complex reasoning.
- **Context Window**: The extension to **1,048,576 tokens** allows the model to handle large amounts of text or information, benefiting applications that require deep analysis.
## Intended Use
- **Corporate Customization** 🏢: Fine-tune the model to address specific challenges and tasks within various business domains.
- **Text Generation Applications** ✍️: Suitable for content creation, customer support automation, long-form text analysis with Retrieval-Augmented Generation (RAG), and answering intricate queries.
- **Research and Development** 🔬: An excellent tool for exploring innovative approaches in natural language processing (NLP) that leverage large context windows for enhanced understanding and reasoning.
## Limitations and Recommendations
- **Fine-Tuning Recommended** 🔧: While the *iSA-02-Nano-Llama-3.2-1B* has a 1,048,576-token context window, it is strongly recommended to fine-tune the model for specific tasks to achieve optimal performance and avoid token repetition.
- **Challenges with Large Contexts** ⚡: Utilizing such large context windows may require significant computational resources and meticulous fine-tuning to maintain response quality.
- **Continuous Feedback** 💬: Users are encouraged to report issues and suggest improvements to continuously enhance the model.
## Simplified Explanation
Think of the model as a super reader and writer.
- **Context Window**: Imagine it as the number of pages in a book the model can read at once. With **1,048,576 tokens**, it can "read" a massive chunk of information simultaneously, allowing for a deep understanding of the topic.
- **1 Billion Parameters** 🧠: These are the "buttons" or "connectors" in the model's digital brain. The more parameters, the more details it can learn and understand. Even as a small model, it is optimized for performing complex reasoning, ensuring smart and coherent responses.
## Initial Idea: Why We Are Doing This
The journey towards the iSA-02 series (with more to follow) began with an unexpected experiment in January 2024. By combining two datasets that were initially thought to be flawed and unusable, and guided by the belief that **'AI is so new that every approach is worth exploring'**, we stumbled upon the first signs of reasoning abilities in a base model we were testing.
This discovery allowed us to unlock hidden insights and behaviors within the models by tapping into the already existing, but previously hidden, reasoning capabilities. We leveraged the model itself to guide us, allowing it to reflect on its own process. From there, we pushed the boundaries, generating new data that led to more extrapolated and refined outcomes.
## Contributions and Feedback
The **NeuraLake** synthetic data platform was the foundation for creating this model, and we are open to questions, suggestions, and collaborations. If you have feedback or want to contribute to the development and improvement of the *iSA-02-Nano-Llama-3.2-1B*, feel free to leave a comment in the community tab.
**Your feedback is essential for us to evolve and reach an even more robust final version!**
## License
This model is distributed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
## Ethical Considerations
While the *iSA-02-Nano-Llama-3.2-1B* is optimized for advanced reasoning tasks, users should be aware of potential biases present in the training data. We recommend thorough evaluation and fine-tuning to mitigate unintended biases and ensure fair and ethical use of the model.
## Frequently Asked Questions (FAQ)
**Q1: How does the extended context window benefit text generation tasks?**
**A:** The extended context window allows the model to maintain coherence and context over much longer passages of text and reasoning, performing better than the standard base model on tasks that require understanding and generating large documents.
**Q2: What computational resources are required to run the *iSA-02-Nano-Llama-3.2-1B*?**
**A:** Due to its large context window, running the model efficiently requires significant memory and processing power. We recommend using GPUs with ample VRAM and optimized configurations for optimal performance. Using vLLM and setting `max_model_len` to 100,000 tokens, it uses between 9 GB and 12 GB of VRAM.
**Q3: Can the model be fine-tuned on proprietary datasets?**
**A:** Yes, the model is designed to be fine-tuned on specific datasets to tailor its performance to particular applications or domains. Add this to your dataset, as the model uses structural tags to guide reasoning:
```text
<User_Prompt>
User prompt
</User_Prompt>
<Reasoning>
The model chain of thought
</Reasoning>
<Answer>
Here is the final answer
</Answer>
```
NeuraLake will provide a comprehensive guide on how to fine-tune the model, along with a small sample dataset available under the MIT license.
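As a minimal sketch, one way to wrap a (prompt, reasoning, answer) triple in these tags when building a dataset (the helper name is hypothetical):
```python
def format_example(user_prompt: str, reasoning: str, answer: str) -> str:
    """Wrap one training example in the iSA-02 structural tags."""
    return (
        f"<User_Prompt>\n{user_prompt}\n</User_Prompt>\n"
        f"<Reasoning>\n{reasoning}\n</Reasoning>\n"
        f"<Answer>\n{answer}\n</Answer>"
    )

print(format_example("What is 2 + 2?", "Add the two integers.", "4"))
```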
----------
## Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B")
model = AutoModelForCausalLM.from_pretrained("NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B")
input_text = "Explain the significance of the extended context window in modern NLP models."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# OpenAi Compatible API:
```python
from openai import OpenAI
client = OpenAI(
api_key="any",
base_url="http://localhost:8000/v1"
)
prompt = input("Prompt: ")
completion = client.chat.completions.create(
model="NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B",
messages=[
{"role": "system", "content": " "},
{"role": "user", "content": prompt}
],
stream=True,
max_tokens = 90000,
)
for chunk in completion:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)
print() # Added a line break to the end of the answer
```
## References
**Card under development.** |
sae-rad/bugged_scaling_laws_vlm_0.001 | sae-rad | 2025-02-26T04:18:28Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-02-26T04:17:25Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
KingEmpire/Ain_1 | KingEmpire | 2025-02-26T04:18:26Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:58:32Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Ain_4 | KingEmpire | 2025-02-26T04:18:19Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:58:33Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Ain_5 | KingEmpire | 2025-02-26T04:17:54Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:58:33Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B-GGUF | NeuraLakeAi | 2025-02-26T04:15:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"facebook",
"meta",
"pytorch",
"reasoning",
"context-dynamic",
"small-models",
"synthetic-data",
"function-calls",
"open-source",
"llama",
"NeuraLake",
"brazil",
"1B",
"base_model:NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B",
"base_model:quantized:NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-02-26T04:05:20Z | ---
tags:
- text-generation
- transformers
- facebook
- meta
- pytorch
- reasoning
- context-dynamic
- small-models
- synthetic-data
- function-calls
- open-source
- llama
- NeuraLake
- brazil
- 1B
license: apache-2.0
base_model: NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B
model_creator: Celso H A Diniz
model_name: NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B
---
# NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B-GGUF (v1.2)
## Overview
The *iSA-02-Nano-Llama-3.2-1B* is a **Base Model** designed for text generation, optimized for reasoning tasks. Based on *meta-llama/Llama-3.2-1B*, this model has been deeply customized by **NeuraLake** and stands out for its ability to work with an extended context window of **1,048,576 tokens**. It was created to allow businesses and developers to fine-tune it for specific tasks that require processing large volumes of information. Designed by NeuraLake using synthetic datasets, the model embodies the philosophy of **"think before you speak,"** enhancing reasoning capabilities for small-scale models.
**✨ Extended Context Window ✨:** The *iSA-02-Nano-Llama-3.2-1B* features an unprecedented context window of **1,048,576 tokens**, enabling the analysis and generation of extremely long and complex texts. This sets a new standard for small yet powerful reasoning models.
## Key Features
- **Extended Context**: Supports up to **1,048,576 tokens**, enabling the analysis and generation of long, complex texts.
- **Advanced Reasoning** 🧠: Integrates sophisticated reasoning chains for handling complex tasks.
- **Customization** 🔧: Ideal for businesses seeking to tailor the model to specific tasks, with a robust framework for further fine-tuning and training.
- **Compact Yet Powerful** 💡:
  - *What does this mean?*
    Think of the model as a digital brain that learns from many examples. "Parameters" are like the connections in this brain, and **1 billion parameters** indicate a compact model that is still powerful enough to process and generate information intelligently. Even though it's considered small compared to giant models, it's highly optimized for reasoning tasks.
## Architecture and Training
- **Base Model:** Built on the *meta-llama/Llama-3.2-1B* architecture from Meta, optimized using advanced agent mixing techniques in AAA (AI aligning AI) mode.
- **Training and Data Generation Process**:
The training process leveraged advanced synthetic data generation techniques to create a diverse and extensive dataset comprising billions of tokens. This was achieved through a multi-stage process involving data generation, reasoning chain creation, and translation to ensure high-quality training data.
This approach resulted in a dataset with **billions of tokens**, enabling robust and diverse training for the entire iSA-02 series by NeuraLake, thereby enhancing the model's ability to perform complex reasoning.
- **Context Window**: The extension to **1,048,576 tokens** allows the model to handle large amounts of text or information, benefiting applications that require deep analysis.
## Intended Use
- **Corporate Customization** 🏢: Fine-tune the model to address specific challenges and tasks within various business domains.
- **Text Generation Applications** ✍️: Suitable for content creation, customer support automation, long-form text analysis with Retrieval-Augmented Generation (RAG), and answering intricate queries.
- **Research and Development** 🔬: An excellent tool for exploring innovative approaches in natural language processing (NLP) that leverage large context windows for enhanced understanding and reasoning.
## Limitations and Recommendations
- **Fine-Tuning Recommended** 🔧: While the *iSA-02-Nano-Llama-3.2-1B* has a 1,048,576-token context window, it is strongly recommended to fine-tune the model for specific tasks to achieve optimal performance and avoid token repetition.
- **Challenges with Large Contexts** ⚡: Utilizing such large context windows may require significant computational resources and meticulous fine-tuning to maintain response quality.
- **Continuous Feedback** 💬: Users are encouraged to report issues and suggest improvements to continuously enhance the model.
## Simplified Explanation
Think of the model as a super reader and writer.
- **Context Window**: Imagine it as the number of pages in a book the model can read at once. With **1,048,576 tokens**, it can "read" a massive chunk of information simultaneously, allowing for a deep understanding of the topic.
- **1 Billion Parameters** 🧠: These are the "buttons" or "connectors" in the model's digital brain. The more parameters, the more details it can learn and understand. Even as a small model, it is optimized for performing complex reasoning, ensuring smart and coherent responses.
## Initial Idea: Why We Are Doing This
The journey towards the iSA-02 series (with more to follow) began with an unexpected experiment in January 2024. By combining two datasets that were initially thought to be flawed and unusable, and guided by the belief that **'AI is so new that every approach is worth exploring'**, we stumbled upon the first signs of reasoning abilities in a base model we were testing.
This discovery allowed us to unlock hidden insights and behaviors within the models by tapping into the already existing, but previously hidden, reasoning capabilities. We leveraged the model itself to guide us, allowing it to reflect on its own process. From there, we pushed the boundaries, generating new data that led to more extrapolated and refined outcomes.
## Contributions and Feedback
The **NeuraLake** synthetic data platform was the foundation for creating this model, and we are open to questions, suggestions, and collaborations. If you have feedback or want to contribute to the development and improvement of the *iSA-02-Nano-Llama-3.2-1B*, feel free to leave a comment in the community tab.
**Your feedback is essential for us to evolve and reach an even more robust final version!**
## License
This model is distributed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
## Ethical Considerations
While the *iSA-02-Nano-Llama-3.2-1B* is optimized for advanced reasoning tasks, users should be aware of potential biases present in the training data. We recommend thorough evaluation and fine-tuning to mitigate unintended biases and ensure fair and ethical use of the model.
## Frequently Asked Questions (FAQ)
**Q1: How does the extended context window benefit text generation tasks?**
**A:** The extended context window allows the model to maintain coherence and context over much longer passages of text and reasoning, performing better than the standard base model on tasks that require understanding and generating large documents.
**Q2: What computational resources are required to run the *iSA-02-Nano-Llama-3.2-1B*?**
**A:** Due to its large context window, running the model efficiently requires significant memory and processing power. We recommend using GPUs with ample VRAM and optimized configurations for optimal performance. Using vLLM and setting `max_model_len` to 100,000 tokens, it uses between 9 GB and 12 GB of VRAM.
**Q3: Can the model be fine-tuned on proprietary datasets?**
**A:** Yes, the model is designed to be fine-tuned on specific datasets to tailor its performance to particular applications or domains. Add this to your dataset, as the model uses structural tags to guide reasoning:
```text
<User_Prompt>
User prompt
</User_Prompt>
<Reasoning>
The model chain of thought
</Reasoning>
<Answer>
Here is the final answer
</Answer>
```
NeuraLake will provide a comprehensive guide on how to fine-tune the model, along with a small sample dataset available under the MIT license.
----------
## Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B")
model = AutoModelForCausalLM.from_pretrained("NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B")
input_text = "Explain the significance of the extended context window in modern NLP models."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# OpenAi Compatible API:
```python
from openai import OpenAI
client = OpenAI(
api_key="any",
base_url="http://localhost:8000/v1"
)
prompt = input("Prompt: ")
completion = client.chat.completions.create(
model="NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B",
messages=[
{"role": "system", "content": " "},
{"role": "user", "content": prompt}
],
stream=True,
max_tokens = 90000,
)
for chunk in completion:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)
print() # Added a line break to the end of the answer
```
## References
**Card under development.**
|
zoujunyi/Huatuo-DeepSeek-32B | zoujunyi | 2025-02-26T04:15:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T04:15:14Z | ---
license: apache-2.0
---
|
615guy/kitchen-design | 615guy | 2025-02-26T04:14:08Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2025-02-26T04:14:08Z | ---
license: openrail++
---
|
mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF | mradermacher | 2025-02-26T04:13:31Z | 5,495 | 5 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:quantized:mistralai/Mistral-Small-24B-Instruct-2501",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-30T18:45:09Z | ---
base_model: mistralai/Mistral-Small-24B-Instruct-2501
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
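Alternatively, a single quant can be fetched programmatically with `huggingface_hub` (a sketch; pick a filename from the table below):

```python
# Sketch: download one quant file from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF",
    filename="Mistral-Small-24B-Instruct-2501.i1-Q4_K_M.gguf",  # ~14.4 GB, "fast, recommended"
)
print(path)  # local path usable with llama.cpp-based runtimes
```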
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
7Dragons/unstoppable_66 | 7Dragons | 2025-02-26T04:12:22Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T04:06:29Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sveston/example_model | Sveston | 2025-02-26T04:11:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-23T10:00:55Z | # Example Model
This is my model card README
---
license: mit
---
|
Jonjew/HarmonyinFusionv3 | Jonjew | 2025-02-26T04:10:19Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-02-26T04:06:51Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
diyngyng style , A digital diyngyng style drawing of a yin-yang composed of
swirling ocean waves and towering mountains. Dolphins leap through the
waves, while eagles soar above the mountains, emphasizing the balance
between land and sea with vibrant, high-contrast colors.,
<lora:harmony-in-fusion_v30_rank32_bf16-step00576:1>
output:
url: images/00123-2025-01-20-3509286054.png
- text: >-
diyngyng style , A watercolor diyngyng style painting of a yin-yang symbol
with one half as a sunflower field under a bright sky, the other as a
moonlit meadow with fireflies. The colors blend beautifully, creating a warm
and cool contrast that embodies day and night.,
<lora:harmony-in-fusion_v30_rank32_bf16-step00576:1>
output:
url: images/00107-2025-01-20-1704007027.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: diyngyng
license: unknown
---
# Harmony in Fusion v3
<Gallery />
## Model description
FROM https://civitai.com/models/936792/harmony-in-fusion
Trigger diyngyng
Strength 0.8 - 1.2
Harmony in Fusion is a LoRA that gracefully merges traditional Eastern aesthetics with contemporary digital creativity, creating art that exudes balance and serenity. Trained on artistic images featuring yin-yang symbolism, this model captures the essence of harmonious contrast, weaving subtle and sometimes bold yin-yang shapes into each piece. Every image is an infusion of complementary elements, from koi fish in synchronized flow to tigers and dragons locked in symbolic duality. The delicate dance of light and dark, warm and cool, conveys a sense of unity amidst opposition.
This LoRA doesnโt just create yin-yang art; it infuses it with a digital twist, bringing โHarmony in Fusionโ to lifeโboth as a balanced composition and a harmonious infusion of traditional symbolism into modern form. Use this LoRA to create oppositional subjects that flow harmoniously in balance.
I've debated releasing Version 3.0 for a few months. Rather than having to prompt a yin-yang shape, the model pulls your image in that direction regardless of whether or not you prompt it. I think the effect can be fun so I decided to share it. Version 2.0 is still the more flexible one, but Version 3.0 can lead to some interesting art.
Usage
To use the most recent version of the LoRA, use the following settings:
Trigger word: ink and brushstroke diyngyng style, e.g. "a diyngyng style watercolor painting" or "a diyngyng style pencil sketch."
Other tokens that work well: yin-yang symbol, yin-yang configuration, contrasting shades, opposing, opposite, intertwined, circular
LoRA strength: A strength between 0.8 and 1.2 is recommended; higher strength should enhance the abstract quality of the style.
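A minimal sketch with `diffusers` (assuming a version with FLUX LoRA support; the device, dtype, and LoRA-scale argument may vary by version):

```python
# Sketch: apply this LoRA to FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/HarmonyinFusionv3")

image = pipe(
    "a diyngyng style watercolor painting of a yin-yang symbol, contrasting shades",
    joint_attention_kwargs={"scale": 1.0},  # LoRA strength; 0.8-1.2 recommended
).images[0]
image.save("harmony.png")
```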
## Trigger words
You should use `diyngyng` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/HarmonyinFusionv3/tree/main) them in the Files & versions tab.
|
Kuongan/xlm-roberta-base-som-noaug | Kuongan | 2025-02-26T04:09:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T03:50:10Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-som-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-som-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3066
- F1: 0.3637
- Roc Auc: 0.6383
- Accuracy: 0.4788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3432 | 1.0 | 106 | 0.3461 | 0.0 | 0.5 | 0.3816 |
| 0.3258 | 2.0 | 212 | 0.3400 | 0.0 | 0.5 | 0.3816 |
| 0.3076 | 3.0 | 318 | 0.3131 | 0.0692 | 0.5230 | 0.4099 |
| 0.2959 | 4.0 | 424 | 0.3027 | 0.0745 | 0.5265 | 0.4046 |
| 0.2854 | 5.0 | 530 | 0.2920 | 0.2137 | 0.5712 | 0.4629 |
| 0.2715 | 6.0 | 636 | 0.2937 | 0.1736 | 0.5626 | 0.4576 |
| 0.233 | 7.0 | 742 | 0.3147 | 0.2084 | 0.5835 | 0.4594 |
| 0.2275 | 8.0 | 848 | 0.2829 | 0.2705 | 0.5963 | 0.4912 |
| 0.2052 | 9.0 | 954 | 0.2919 | 0.2695 | 0.6049 | 0.4735 |
| 0.186 | 10.0 | 1060 | 0.3022 | 0.2667 | 0.6142 | 0.4682 |
| 0.1805 | 11.0 | 1166 | 0.3008 | 0.3441 | 0.6314 | 0.4700 |
| 0.1808 | 12.0 | 1272 | 0.2973 | 0.3154 | 0.6202 | 0.4823 |
| 0.1516 | 13.0 | 1378 | 0.3045 | 0.3540 | 0.6412 | 0.4664 |
| 0.1532 | 14.0 | 1484 | 0.3053 | 0.3408 | 0.6302 | 0.4576 |
| 0.1466 | 15.0 | 1590 | 0.3000 | 0.3593 | 0.6400 | 0.4806 |
| 0.1373 | 16.0 | 1696 | 0.3056 | 0.3503 | 0.6358 | 0.4753 |
| 0.1343 | 17.0 | 1802 | 0.3054 | 0.3472 | 0.6336 | 0.4735 |
| 0.1326 | 18.0 | 1908 | 0.3053 | 0.3614 | 0.6376 | 0.4770 |
| 0.116 | 19.0 | 2014 | 0.3051 | 0.3635 | 0.6375 | 0.4788 |
| 0.1288 | 20.0 | 2120 | 0.3066 | 0.3637 | 0.6383 | 0.4788 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
kainatq/KPRA-7b | kainatq | 2025-02-26T04:07:13Z | 22 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:ChaoticNeutrals/RP_Vision_7B",
"base_model:merge:ChaoticNeutrals/RP_Vision_7B",
"base_model:Endevor/InfinityRP-v1-7B",
"base_model:merge:Endevor/InfinityRP-v1-7B",
"base_model:MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp",
"base_model:merge:MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp",
"base_model:ResplendentAI/DaturaCookie_7B",
"base_model:merge:ResplendentAI/DaturaCookie_7B",
"base_model:icefog72/IceCocoaRP-7b",
"base_model:merge:icefog72/IceCocoaRP-7b",
"base_model:icefog72/IceDrunkenCherryRP-7b",
"base_model:merge:icefog72/IceDrunkenCherryRP-7b",
"base_model:kainatq/Kainoverse-7b-v0.1",
"base_model:merge:kainatq/Kainoverse-7b-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-13T06:29:04Z | ---
base_model:
- ChaoticNeutrals/RP_Vision_7B
- kainatq/Kainoverse-7b-v0.1
- icefog72/IceCocoaRP-7b
- icefog72/IceDrunkenCherryRP-7b
- MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp
- ResplendentAI/DaturaCookie_7B
- Endevor/InfinityRP-v1-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [kainatq/Kainoverse-7b-v0.1](https://huggingface.co/kainatq/Kainoverse-7b-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [ChaoticNeutrals/RP_Vision_7B](https://huggingface.co/ChaoticNeutrals/RP_Vision_7B)
* [icefog72/IceCocoaRP-7b](https://huggingface.co/icefog72/IceCocoaRP-7b)
* [icefog72/IceDrunkenCherryRP-7b](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b)
* [MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp)
* [ResplendentAI/DaturaCookie_7B](https://huggingface.co/ResplendentAI/DaturaCookie_7B)
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: kainatq/Kainoverse-7b-v0.1
parameters:
models:
- model: ResplendentAI/DaturaCookie_7B
- model: icefog72/IceDrunkenCherryRP-7b
- model: ChaoticNeutrals/RP_Vision_7B
- model: Endevor/InfinityRP-v1-7B
- model: MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp
- model: icefog72/IceCocoaRP-7b
dtype: bfloat16
```
|
robiulawaldev/bea5168f-32a4-4881-90ca-21dd625274f8 | robiulawaldev | 2025-02-26T04:07:10Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"region:us"
] | null | 2025-02-26T04:06:53Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/codellama-7b
model-index:
- name: robiulawaldev/bea5168f-32a4-4881-90ca-21dd625274f8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/bea5168f-32a4-4881-90ca-21dd625274f8
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kuongan/CS221-xlm-roberta-base-tat-noaug-finetuned-tat-tapt | Kuongan | 2025-02-26T04:05:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Kuongan/xlm-roberta-base-tat-noaug",
"base_model:finetune:Kuongan/xlm-roberta-base-tat-noaug",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T04:00:41Z | ---
library_name: transformers
license: mit
base_model: Kuongan/xlm-roberta-base-tat-noaug
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-xlm-roberta-base-tat-noaug-finetuned-tat-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-xlm-roberta-base-tat-noaug-finetuned-tat-tapt
This model is a fine-tuned version of [Kuongan/xlm-roberta-base-tat-noaug](https://huggingface.co/Kuongan/xlm-roberta-base-tat-noaug) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- F1: 0.6555
- Roc Auc: 0.8121
- Accuracy: 0.7864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1478 | 1.0 | 55 | 0.1514 | 0.6006 | 0.7892 | 0.7773 |
| 0.1576 | 2.0 | 110 | 0.1502 | 0.6007 | 0.7819 | 0.7659 |
| 0.1392 | 3.0 | 165 | 0.1715 | 0.5792 | 0.7728 | 0.7545 |
| 0.1384 | 4.0 | 220 | 0.1452 | 0.6555 | 0.8121 | 0.7864 |
| 0.109 | 5.0 | 275 | 0.1540 | 0.6055 | 0.7802 | 0.7591 |
| 0.1034 | 6.0 | 330 | 0.1603 | 0.5900 | 0.7742 | 0.7591 |
| 0.0918 | 7.0 | 385 | 0.1867 | 0.5998 | 0.7724 | 0.7318 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
AnnaoeFranko/GlucoExtend | AnnaoeFranko | 2025-02-26T04:05:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T04:04:49Z |
Official Website: https://supplementcarts.com/gluco-extend-official/
[Blood sugar](https://supplementcarts.com/gluco-extend-official/ ) management is a crucial aspect of overall health, especially for those dealing with diabetes or prediabetes. Unstable glucose levels can lead to various health complications, including fatigue, weight gain, and cardiovascular diseases. Gluco Extend has emerged as a promising dietary supplement that supports healthy blood sugar levels using natural ingredients. This article delves deep into Gluco Extend, exploring its benefits, ingredients, usage, scientific backing, and user testimonials.
|
Kei5uke/deepseek | Kei5uke | 2025-02-26T04:02:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T03:43:09Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-14b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kei5uke
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-14b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kuongan/xlm-roberta-base-rus-noaug | Kuongan | 2025-02-26T04:00:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T03:49:58Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-rus-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-rus-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1776
- F1: 0.8139
- Roc Auc: 0.8817
- Accuracy: 0.7437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4354 | 1.0 | 84 | 0.4392 | 0.0 | 0.5 | 0.1608 |
| 0.3744 | 2.0 | 168 | 0.3579 | 0.2399 | 0.5998 | 0.3668 |
| 0.2949 | 3.0 | 252 | 0.2638 | 0.5115 | 0.7243 | 0.5879 |
| 0.2211 | 4.0 | 336 | 0.2227 | 0.6541 | 0.7716 | 0.6231 |
| 0.1898 | 5.0 | 420 | 0.1879 | 0.7815 | 0.8500 | 0.7286 |
| 0.1404 | 6.0 | 504 | 0.1775 | 0.8031 | 0.8641 | 0.7236 |
| 0.1202 | 7.0 | 588 | 0.1729 | 0.8105 | 0.8719 | 0.7387 |
| 0.1083 | 8.0 | 672 | 0.1776 | 0.8139 | 0.8817 | 0.7437 |
| 0.0828 | 9.0 | 756 | 0.1818 | 0.7988 | 0.8619 | 0.7387 |
| 0.0686 | 10.0 | 840 | 0.1755 | 0.8079 | 0.8678 | 0.7538 |
| 0.0691 | 11.0 | 924 | 0.1860 | 0.8050 | 0.8704 | 0.7337 |
| 0.059 | 12.0 | 1008 | 0.1695 | 0.8078 | 0.8700 | 0.7387 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Kokoutou/Leipzig_9 | Kokoutou | 2025-02-26T03:59:44Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:44:34Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
samoline/4fb6091d-d88a-49fe-9d4c-ad3b17a39a59 | samoline | 2025-02-26T03:59:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | 2025-02-26T03:51:58Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4fb6091d-d88a-49fe-9d4c-ad3b17a39a59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bd07874fa96e3b1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bd07874fa96e3b1a_train_data.json
type:
field_input: description
field_instruction: input persona
field_output: synthesized text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/4fb6091d-d88a-49fe-9d4c-ad3b17a39a59
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/bd07874fa96e3b1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: c36f8c49-e5a9-4577-b0b9-4685f343695c
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: c36f8c49-e5a9-4577-b0b9-4685f343695c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4fb6091d-d88a-49fe-9d4c-ad3b17a39a59
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.456 | 0.0000 | 1 | 1.2673 |
| 1.3132 | 0.0000 | 2 | 1.2674 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kuongan/CS221-xlm-roberta-base-sun-noaug-finetuned-sun-tapt | Kuongan | 2025-02-26T03:59:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Kuongan/xlm-roberta-base-sun-noaug",
"base_model:finetune:Kuongan/xlm-roberta-base-sun-noaug",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T03:56:09Z | ---
library_name: transformers
license: mit
base_model: Kuongan/xlm-roberta-base-sun-noaug
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-xlm-roberta-base-sun-noaug-finetuned-sun-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-xlm-roberta-base-sun-noaug-finetuned-sun-tapt
This model is a fine-tuned version of [Kuongan/xlm-roberta-base-sun-noaug](https://huggingface.co/Kuongan/xlm-roberta-base-sun-noaug) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2988
- F1: 0.1537
- Roc Auc: 0.5
- Accuracy: 0.7195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4561 | 1.0 | 52 | 0.2988 | 0.1537 | 0.5 | 0.7195 |
| 0.3015 | 2.0 | 104 | 0.2725 | 0.1537 | 0.5 | 0.7195 |
| 0.2883 | 3.0 | 156 | 0.2727 | 0.1537 | 0.5 | 0.7195 |
| 0.2817 | 4.0 | 208 | 0.2724 | 0.1537 | 0.5 | 0.7195 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Kuongan/xlm-roberta-base-tat-noaug | Kuongan | 2025-02-26T03:58:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T03:50:36Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-tat-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-tat-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2840
- F1: 0.5056
- Roc Auc: 0.7198
- Accuracy: 0.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.7082 | 1.0 | 32 | 0.6681 | 0.0563 | 0.5 | 0.0 |
| 0.5103 | 2.0 | 64 | 0.3983 | 0.0 | 0.5 | 0.16 |
| 0.3937 | 3.0 | 96 | 0.3967 | 0.0 | 0.5 | 0.16 |
| 0.3863 | 4.0 | 128 | 0.3996 | 0.0 | 0.5 | 0.16 |
| 0.368 | 5.0 | 160 | 0.3604 | 0.1159 | 0.5527 | 0.24 |
| 0.3345 | 6.0 | 192 | 0.3290 | 0.2036 | 0.5791 | 0.315 |
| 0.3123 | 7.0 | 224 | 0.3166 | 0.2215 | 0.5865 | 0.32 |
| 0.2906 | 8.0 | 256 | 0.3117 | 0.3184 | 0.6347 | 0.44 |
| 0.2629 | 9.0 | 288 | 0.3097 | 0.3001 | 0.6186 | 0.4 |
| 0.2414 | 10.0 | 320 | 0.2926 | 0.4225 | 0.6743 | 0.485 |
| 0.2206 | 11.0 | 352 | 0.2947 | 0.4208 | 0.6796 | 0.505 |
| 0.1996 | 12.0 | 384 | 0.2882 | 0.4328 | 0.6861 | 0.545 |
| 0.1875 | 13.0 | 416 | 0.2820 | 0.4593 | 0.6988 | 0.54 |
| 0.1862 | 14.0 | 448 | 0.2852 | 0.4764 | 0.7050 | 0.555 |
| 0.1764 | 15.0 | 480 | 0.2903 | 0.4771 | 0.7085 | 0.565 |
| 0.1699 | 16.0 | 512 | 0.2897 | 0.4750 | 0.7103 | 0.56 |
| 0.1611 | 17.0 | 544 | 0.2894 | 0.4848 | 0.7114 | 0.56 |
| 0.1607 | 18.0 | 576 | 0.2840 | 0.5056 | 0.7198 | 0.58 |
| 0.1599 | 19.0 | 608 | 0.2861 | 0.5020 | 0.7184 | 0.575 |
| 0.1498 | 20.0 | 640 | 0.2859 | 0.5009 | 0.7178 | 0.57 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Kokoutou/Leipzig_11 | Kokoutou | 2025-02-26T03:57:38Z | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:52:50Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q4_K_M-GGUF | kainatq | 2025-02-26T03:56:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Epiculous/Azure_Dusk-v0.2",
"base_model:merge:Epiculous/Azure_Dusk-v0.2",
"base_model:Epiculous/Crimson_Dawn-v0.2",
"base_model:merge:Epiculous/Crimson_Dawn-v0.2",
"base_model:Epiculous/Violet_Twilight-v0.2",
"base_model:merge:Epiculous/Violet_Twilight-v0.2",
"base_model:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T22:23:34Z | ---
base_model:
- Epiculous/Azure_Dusk-v0.2
- Nitral-AI/Captain-Eris_Violet-V0.420-12B
- Epiculous/Violet_Twilight-v0.2
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- Epiculous/Crimson_Dawn-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Main Model:
https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b) as a base.
### Prompt style
You need to use the ChatML prompt format.
### Models Merged
The following models were included in the merge:
* [Epiculous/Azure_Dusk-v0.2](https://huggingface.co/Epiculous/Azure_Dusk-v0.2)
* [Nitral-AI/Captain-Eris_Violet-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-V0.420-12B)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
* [Epiculous/Crimson_Dawn-v0.2](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
parameters:
models:
- model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
- model: Epiculous/Violet_Twilight-v0.2
- model: Epiculous/Azure_Dusk-v0.2
- model: Epiculous/Crimson_Dawn-v0.2
dtype: bfloat16
``` |
kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q5_K_M-GGUF | kainatq | 2025-02-26T03:56:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Epiculous/Azure_Dusk-v0.2",
"base_model:merge:Epiculous/Azure_Dusk-v0.2",
"base_model:Epiculous/Crimson_Dawn-v0.2",
"base_model:merge:Epiculous/Crimson_Dawn-v0.2",
"base_model:Epiculous/Violet_Twilight-v0.2",
"base_model:merge:Epiculous/Violet_Twilight-v0.2",
"base_model:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T22:41:03Z | ---
base_model:
- Epiculous/Azure_Dusk-v0.2
- Nitral-AI/Captain-Eris_Violet-V0.420-12B
- Epiculous/Violet_Twilight-v0.2
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- Epiculous/Crimson_Dawn-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Main Model:
https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b) as a base.
### Prompt style
You need to use the ChatML prompt format.
### Models Merged
The following models were included in the merge:
* [Epiculous/Azure_Dusk-v0.2](https://huggingface.co/Epiculous/Azure_Dusk-v0.2)
* [Nitral-AI/Captain-Eris_Violet-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-V0.420-12B)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
* [Epiculous/Crimson_Dawn-v0.2](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
parameters:
models:
- model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
- model: Epiculous/Violet_Twilight-v0.2
- model: Epiculous/Azure_Dusk-v0.2
- model: Epiculous/Crimson_Dawn-v0.2
dtype: bfloat16
``` |
blattimer/Llama-3.2-1B-Instruct | blattimer | 2025-02-26T03:55:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T03:48:47Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.4.0+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
qing-yao/strict_balanced_cf_seed-21_1e-3 | qing-yao | 2025-02-26T03:55:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T17:20:30Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: strict_balanced_cf_seed-21_1e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict_balanced_cf_seed-21_1e-3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1894
- Accuracy: 0.4008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 21
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 5.9825 | 0.9998 | 1486 | 4.4171 | 0.2926 |
| 4.3054 | 1.9997 | 2972 | 3.9050 | 0.3329 |
| 3.6755 | 2.9997 | 4458 | 3.6276 | 0.3573 |
| 3.4878 | 3.9996 | 5944 | 3.4715 | 0.3714 |
| 3.2604 | 4.9995 | 7430 | 3.3707 | 0.3811 |
| 3.1894 | 5.9994 | 8916 | 3.3120 | 0.3864 |
| 3.0822 | 6.9993 | 10402 | 3.2720 | 0.3903 |
| 3.0424 | 7.9999 | 11889 | 3.2489 | 0.3915 |
| 2.9835 | 8.9998 | 13375 | 3.2300 | 0.3943 |
| 2.9586 | 9.9997 | 14861 | 3.2202 | 0.3957 |
| 2.9205 | 10.9997 | 16347 | 3.2069 | 0.3972 |
| 2.901 | 11.9996 | 17833 | 3.2097 | 0.3975 |
| 2.8789 | 12.9995 | 19319 | 3.1967 | 0.3987 |
| 2.8594 | 13.9994 | 20805 | 3.1981 | 0.3986 |
| 2.8502 | 14.9993 | 22291 | 3.1954 | 0.3996 |
| 2.8349 | 15.9999 | 23778 | 3.1954 | 0.3996 |
| 2.8319 | 16.9998 | 25264 | 3.1878 | 0.4001 |
| 2.8127 | 17.9997 | 26750 | 3.1866 | 0.4005 |
| 2.8195 | 18.9997 | 28236 | 3.1900 | 0.4002 |
| 2.7995 | 19.9982 | 29720 | 3.1894 | 0.4008 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.0
|
kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B | kainatq | 2025-02-26T03:55:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Epiculous/Azure_Dusk-v0.2",
"base_model:merge:Epiculous/Azure_Dusk-v0.2",
"base_model:Epiculous/Crimson_Dawn-v0.2",
"base_model:merge:Epiculous/Crimson_Dawn-v0.2",
"base_model:Epiculous/Violet_Twilight-v0.2",
"base_model:merge:Epiculous/Violet_Twilight-v0.2",
"base_model:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T21:23:57Z | ---
base_model:
- Epiculous/Azure_Dusk-v0.2
- Nitral-AI/Captain-Eris_Violet-V0.420-12B
- Epiculous/Violet_Twilight-v0.2
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- Epiculous/Crimson_Dawn-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b) as a base.
### Prompt style
You need to use the ChatML prompt format.
### GGUF:
Here are direct download links to all GGUFs:
[Q5_K_M](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q5_K_M-GGUF/resolve/main/kaiden-sakura-violet-square-azura-crimson-12b-q5_k_m.gguf)
[Q4_K_M](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q4_K_M-GGUF/resolve/main/kaiden-sakura-violet-square-azura-crimson-12b-q4_k_m.gguf)
[Q3_K_L](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q3_K_L-GGUF/resolve/main/kaiden-sakura-violet-square-azura-crimson-12b-q3_k_l.gguf)
[Q3_K_M](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q3_K_M-GGUF/resolve/main/kaiden-sakura-violet-square-azura-crimson-12b-q3_k_m.gguf)
### For oobabooga/text-generation-webui
Here are all the links if you want to download via oobabooga/text-generation-webui:
[Q5_K_M](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q5_K_M-GGUF)
[Q4_k_M](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q4_K_M-GGUF)
[Q3_K_L](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q3_K_L-GGUF)
[Q3_K_M](https://huggingface.co/kainatq/Kaiden-Sakura-Violet-Square-Azura-crimson-12B-Q3_K_M-GGUF)
### Models Merged
The following models were included in the merge:
* [Epiculous/Azure_Dusk-v0.2](https://huggingface.co/Epiculous/Azure_Dusk-v0.2)
* [Nitral-AI/Captain-Eris_Violet-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-V0.420-12B)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
* [Epiculous/Crimson_Dawn-v0.2](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
parameters:
models:
- model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
- model: Epiculous/Violet_Twilight-v0.2
- model: Epiculous/Azure_Dusk-v0.2
- model: Epiculous/Crimson_Dawn-v0.2
dtype: bfloat16
``` |
Kokoutou/Leipzig_1 | Kokoutou | 2025-02-26T03:54:33Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:44:31Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tuantmdev/7a0bb651-9014-440a-993f-305619a36545 | tuantmdev | 2025-02-26T03:54:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-02-26T03:24:17Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a0bb651-9014-440a-993f-305619a36545
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e4baa04ca4e4fdd2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e4baa04ca4e4fdd2_train_data.json
type:
field_instruction: import_statement
field_output: next_line
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: true
hub_model_id: tuantmdev/7a0bb651-9014-440a-993f-305619a36545
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1e-4
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 40
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/e4baa04ca4e4fdd2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
save_strategy: steps
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e3d59494-2ee7-4cc1-b4d8-1ec645f12911
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: e3d59494-2ee7-4cc1-b4d8-1ec645f12911
warmup_steps: 80
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a0bb651-9014-440a-993f-305619a36545
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 80
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 4.7432 |
| 2.9064 | 0.0362 | 50 | 1.7389 |
| 1.6391 | 0.0724 | 100 | 1.6316 |
| 1.5544 | 0.1086 | 150 | 1.5727 |
| 1.4779 | 0.1448 | 200 | 1.5435 |
| 1.4556 | 0.1810 | 250 | 1.5171 |
| 1.4762 | 0.2172 | 300 | 1.5001 |
| 1.5009 | 0.2534 | 350 | 1.4819 |
| 1.4524 | 0.2896 | 400 | 1.4848 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kuongan/xlm-roberta-base-sun-noaug | Kuongan | 2025-02-26T03:54:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-26T03:50:37Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-sun-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-sun-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5574
- F1: 0.1466
- Roc Auc: 0.4989
- Accuracy: 0.4372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6354 | 1.0 | 29 | 0.5574 | 0.1466 | 0.4989 | 0.4372 |
| 0.5644 | 2.0 | 58 | 0.4211 | 0.1405 | 0.5 | 0.4472 |
| 0.4091 | 3.0 | 87 | 0.4081 | 0.1405 | 0.5 | 0.4472 |
| 0.4172 | 4.0 | 116 | 0.4075 | 0.1405 | 0.5 | 0.4472 |
| 0.4186 | 5.0 | 145 | 0.4068 | 0.1405 | 0.5 | 0.4472 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
simplescaling/s1-32B | simplescaling | 2025-02-26T03:53:18Z | 11,377 | 282 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:simplescaling/s1K",
"arxiv:2501.19393",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-01-14T20:30:52Z | ---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- simplescaling/s1K
---
**We recommend using our successor [s1.1](https://huggingface.co/simplescaling/s1.1-32B), which has better performance.**
# Model Summary
> s1 is a reasoning model finetuned from Qwen2.5-32B-Instruct on just 1,000 examples. It matches o1-preview & exhibits test-time scaling via budget forcing.
- **Repository:** [simplescaling/s1](https://github.com/simplescaling/s1)
- **Paper:** https://arxiv.org/abs/2501.19393
# Use
The model usage is documented [here](https://github.com/simplescaling/s1?tab=readme-ov-file#inference).
# Evaluation
| Metric | s1-32B | s1.1-32B | o1-preview | o1 | DeepSeek-R1 | DeepSeek-R1-Distill-Qwen-32B |
|---|---|---|---|---|---|---|
| # examples | 1K | 1K | ? | ? | >800K | 800K |
| AIME2024 | 56.7 | 56.7 | 40.0 | 74.4 | 79.8 | 72.6 |
| AIME2025 I | 26.7 | 60.0 | 37.5 | ? | 65.0 | 46.1 |
| MATH500 | 93.0 | 95.4 | 81.4 | 94.8 | 97.3 | 94.3 |
| GPQA-Diamond | 59.6 | 63.6 | 75.2 | 77.3 | 71.5 | 62.1 |
Note that s1-32B and s1.1-32B use budget forcing in this table; specifically, ignoring the end-of-thinking token and appending "Wait" up to four times.
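As an illustrative sketch of that decoding trick (the `generate` callable and the end-of-thinking marker are placeholders; see the linked repository for the actual implementation):

```python
# Sketch of budget forcing, not the repo's exact code.
def budget_forced_generate(generate, prompt, end_think="<end_of_thinking>", max_waits=4):
    text = generate(prompt)
    for _ in range(max_waits):
        if not text.endswith(end_think):
            break  # the model is still reasoning or has already answered
        # Ignore the end-of-thinking marker and append "Wait" to extend test-time compute.
        text = generate(text[: -len(end_think)] + "Wait")
    return text
```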
# Citation
```bibtex
@misc{muennighoff2025s1simpletesttimescaling,
title={s1: Simple test-time scaling},
      author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},
year={2025},
eprint={2501.19393},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.19393},
}
``` |
VLM-Reasoner/Qwen2.5-VL-3B-PPO-DeepScaler-280step | VLM-Reasoner | 2025-02-26T03:50:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-02-26T03:33:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
The card does not document a specific entry point; the sketch below is an assumption based on the `qwen2_5_vl` tag and the standard Qwen2.5-VL classes in recent `transformers` releases.
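```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "VLM-Reasoner/Qwen2.5-VL-3B-PPO-DeepScaler-280step"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)  # handles text and image inputs
```
See the upstream Qwen2.5-VL documentation for the chat template and image-input format.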
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeanwei0721/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo | jeanwei0721 | 2025-02-26T03:49:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T03:47:52Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jeanwei0721
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kokoutou/Leipzig_4 | Kokoutou | 2025-02-26T03:48:46Z | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:44:32Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Kokoutou/Leipzig_5 | Kokoutou | 2025-02-26T03:47:34Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:44:33Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
7Dragons/unstoppable_9 | 7Dragons | 2025-02-26T03:44:35Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:38:36Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Mattia2700/SmolLM-135M-Instruct_ClinicalWhole_5e-05_constant_512_flattening | Mattia2700 | 2025-02-26T03:44:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T02:17:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
No official snippet is provided, so the sketch below assumes the standard `transformers` chat pipeline; the clinical prompt is purely illustrative.
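```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Mattia2700/SmolLM-135M-Instruct_ClinicalWhole_5e-05_constant_512_flattening",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize: the patient presents with fever and a dry cough."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```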
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
figurek1m/q-Taxi-v3 | figurek1m | 2025-02-26T03:41:28Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-26T03:27:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="figurek1m/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
edwardlee4948/finetuned-qwen2.5-14b | edwardlee4948 | 2025-02-26T03:41:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-21T02:48:11Z | ---
base_model: unsloth/qwen2.5-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** edwardlee4948
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LandCruiser/Essen_6 | LandCruiser | 2025-02-26T03:40:03Z | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:29:16Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/Essen_4 | LandCruiser | 2025-02-26T03:40:01Z | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:29:16Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
flyyufelix/Qwen-2.5-7B-Simple-RL | flyyufelix | 2025-02-26T03:40:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-24T10:14:30Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: Qwen-2.5-7B-Simple-RL
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-Simple-RL
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flyyufelix/Qwen-2.5-7B-Simple-RL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
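For illustration, the sketch below shows a minimal GRPO run with TRL's `GRPOTrainer`; the dataset and length-based reward are toy stand-ins from the TRL documentation, not the recipe used for this model.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 200 characters
def reward_len(completions, **kwargs):
    return [-abs(200 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen-2.5-GRPO", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```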
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.47.1
- Pytorch: 2.5.1
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LandCruiser/Essen_9 | LandCruiser | 2025-02-26T03:39:57Z | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:29:18Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/Essen_7 | LandCruiser | 2025-02-26T03:39:43Z | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-02-26T03:29:18Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
John6666/ilyx-i-love-you-xoxo-v34-sdxl | John6666 | 2025-02-26T03:39:14Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"Illustrious XL v1.0",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-02-26T03:31:04Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- Illustrious XL v1.0
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1158956/ilyx-i-love-you-xoxo?modelVersionId=1462322).
This model created by [SulphAI](https://civitai.com/user/SulphAI).
|
mosessss/IL_Slider__Tweaker_Color_Temperature_Saturation_Brightness | mosessss | 2025-02-26T03:38:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T03:35:52Z | Original model:https://civitai.com/models/1093089?modelVersionId=1227713
Created by:https://civitai.green/user/Akiseki |
zulkifliarshad/t5-finetune-address-my | zulkifliarshad | 2025-02-26T03:38:07Z | 342 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-15T08:32:45Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-finetune-address-my
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetune-address-my
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0522
- Exact Match: 83.8235
- Gen Len: 82.6103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 0.3462 | 1.0 | 304 | 0.1774 | 49.2647 | 80.9338 |
| 0.1542 | 2.0 | 608 | 0.0771 | 66.9118 | 82.3162 |
| 0.0756 | 3.0 | 912 | 0.0520 | 78.6765 | 83.4779 |
| 0.0459 | 4.0 | 1216 | 0.0547 | 79.4118 | 82.5294 |
| 0.0249 | 5.0 | 1520 | 0.0514 | 81.6176 | 82.4118 |
| 0.0183 | 6.0 | 1824 | 0.0514 | 82.3529 | 82.4338 |
| 0.013 | 7.0 | 2128 | 0.0507 | 81.6176 | 82.3897 |
| 0.036 | 8.0 | 2432 | 0.0524 | 83.0882 | 82.6176 |
| 0.0313 | 9.0 | 2736 | 0.0501 | 83.8235 | 82.5368 |
| 0.0106 | 10.0 | 3040 | 0.0523 | 82.3529 | 82.4632 |
| 0.0076 | 11.0 | 3344 | 0.0519 | 82.3529 | 82.6838 |
| 0.0029 | 12.0 | 3648 | 0.0522 | 83.8235 | 82.6103 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
nm-testing/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | nm-testing | 2025-02-26T03:36:59Z | 405 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"vllm",
"vision",
"w4a16",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | image-text-to-text | 2025-02-07T17:01:48Z | ---
tags:
- vllm
- vision
- w4a16
license: apache-2.0
license_link: >-
https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
---
# Qwen2.5-VL-3B-Instruct-quantized-w4a16
## Model Overview
- **Model Architecture:** Qwen/Qwen2.5-VL-3B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT4
- **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16",
trust_remote_code=True,
max_model_len=4096,
max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
"prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
"multi_modal_data": {
"image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
},
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
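A minimal client-side sketch of that path follows; the port, API key, and image URL are placeholders.
```python
# Start the server first: vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```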
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
TraceableQwen2_5_VLForConditionalGeneration,
)
from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy, ActivationOrdering, QuantizationScheme
# Load model.
model_id = "Qwen/Qwen2.5-VL-3B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac=0.01
# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
# preprocess
buffered = BytesIO()
example["image"].save(buffered, format="PNG")
encoded_image = base64.b64encode(buffered.getvalue())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image;base64,{encoded_image_text}"
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": base64_qwen},
{"type": "text", "text": "What does the image show?"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
# tokenize
return processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
)
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)
# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
assert len(batch) == 1
return {key: torch.tensor(value) for key, value in batch[0].items()}
recipe = GPTQModifier(
targets="Linear",
config_groups={
"config_group": QuantizationScheme(
targets=["Linear"],
weights=QuantizationArgs(
num_bits=4,
type=QuantizationType.INT,
strategy=QuantizationStrategy.GROUP,
group_size=128,
symmetric=True,
dynamic=False,
actorder=ActivationOrdering.WEIGHT,
),
),
},
sequential_targets=["Qwen2_5_VLDecoderLayer"],
ignore=["lm_head", "re:visual.*"],
update_size=NUM_CALIBRATION_SAMPLES,
dampening_frac=dampening_frac
)
SAVE_DIR=f"{model_id.split('/')[1]}-quantized.w4a16"
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=data_collator,
output_dir=SAVE_DIR
)
```
</details>
## Evaluation
The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
<details>
<summary>Evaluation Commands</summary>
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
python -m eval.run eval_vllm \
--model_name neuralmagic/pixtral-12b-quantized.w8a8 \
--url http://0.0.0.0:8000 \
--output_dir ~/tmp \
--eval_name <vision_task_name>
```
### Text-based Tasks
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
```
#### MGSM
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<th>Qwen2.5-VL-3B-Instruct-quantized.W4A16</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>44.56</td>
<td>41.56</td>
<td>93.28%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>75.94</td>
<td>73.58</td>
<td>96.89</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>92.53</td>
<td>91.58</td>
<td>98.97%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>81.20</td>
<td>78.96</td>
<td>97.24%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>54.15</td>
<td>45.75</td>
<td>84.51%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>69.28</b></td>
<td><b>66.29</b></td>
<td><b>95.68%</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>52.49</td>
<td>35.82</td>
<td>68.24%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>65.32</td>
<td>62.80</td>
<td>96.14%</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 1.73x speedup in single-stream deployment and up to 3.87x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>3.1</td>
<td>1454</td>
<td>1.8</td>
<td>2546</td>
<td>1.7</td>
<td>2610</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.27</td>
<td>2.6</td>
<td>1708</td>
<td>1.3</td>
<td>3340</td>
<td>1.3</td>
<td>3459</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.57</td>
<td>2.4</td>
<td>1886</td>
<td>1.0</td>
<td>4409</td>
<td>1.0</td>
<td>4409</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>2.2</td>
<td>920</td>
<td>1.3</td>
<td>1603</td>
<td>1.2</td>
<td>1636</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.09</td>
<td>2.1</td>
<td>975</td>
<td>1.2</td>
<td>1743</td>
<td>1.1</td>
<td>1814</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.20</td>
<td>2.0</td>
<td>1011</td>
<td>1.0</td>
<td>2015</td>
<td>1.0</td>
<td>2012</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>1.5</td>
<td>740</td>
<td>0.9</td>
<td>1221</td>
<td>0.9</td>
<td>1276</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<td>1.06</td>
<td>1.4</td>
<td>768</td>
<td>0.9</td>
<td>1276</td>
<td>0.8</td>
<td>1399</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.24</td>
<td>0.9</td>
<td>1219</td>
<td>0.9</td>
<td>1270</td>
<td>0.8</td>
<td>1304</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>0.5</td>
<td>2405</td>
<td>2.6</td>
<td>11889</td>
<td>2.9</td>
<td>12909</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.26</td>
<td>0.6</td>
<td>2725</td>
<td>3.4</td>
<td>15162</td>
<td>3.9</td>
<td>17673</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.39</td>
<td>0.6</td>
<td>2548</td>
<td>3.9</td>
<td>17437</td>
<td>4.7</td>
<td>21223</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>0.8</td>
<td>1663</td>
<td>3.9</td>
<td>7899</td>
<td>4.4</td>
<td>8924</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.06</td>
<td>0.9</td>
<td>1734</td>
<td>4.2</td>
<td>8488</td>
<td>4.7</td>
<td>9548</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.10</td>
<td>0.9</td>
<td>1775</td>
<td>4.2</td>
<td>8540</td>
<td>5.1</td>
<td>10318</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>1.1</td>
<td>1188</td>
<td>4.3</td>
<td>4656</td>
<td>4.3</td>
<td>4676</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<td>1.15</td>
<td>1.4</td>
<td>1570</td>
<td>4.3</td>
<td>4676</td>
<td>4.8</td>
<td>5220</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.96</td>
<td>4.2</td>
<td>4598</td>
<td>4.1</td>
<td>4505</td>
<td>4.4</td>
<td>4838</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025). |
nm-testing/Qwen2.5-VL-3B-Instruct-quantized.w8a8 | nm-testing | 2025-02-26T03:32:15Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"vllm",
"vision",
"w8a8",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | image-text-to-text | 2025-02-07T17:02:30Z | ---
tags:
- vllm
- vision
- w8a8
license: apache-2.0
license_link: >-
https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
---
# Qwen2.5-VL-3B-Instruct-quantized-w8a8
## Model Overview
- **Model Architecture:** Qwen/Qwen2.5-VL-3B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT8
- **Activation quantization:** INT8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) to INT8 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8",
trust_remote_code=True,
max_model_len=4096,
max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
"prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
"multi_modal_data": {
"image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
},
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
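A minimal client-side sketch, assuming a locally launched server (the port, API key, and prompt are placeholders):
```python
# Start the server first: vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8",
    messages=[{"role": "user", "content": "Give a one-sentence summary of INT8 quantization."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```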
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
import base64
import os
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
TraceableQwen2_5_VLForConditionalGeneration,
)
# Load model.
model_id = "Qwen/Qwen2.5-VL-3B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac = 0.01  # GPTQ dampening fraction used for calibration
save_name = f"{model_id.split('/')[1]}-W8A8-samples{NUM_CALIBRATION_SAMPLES}-df{dampening_frac}"
save_path = os.path.join(".", save_name)
print("Save Path will be:", save_path)
# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
# preprocess
buffered = BytesIO()
example["image"].save(buffered, format="PNG")
encoded_image = base64.b64encode(buffered.getvalue())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image;base64,{encoded_image_text}"
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": base64_qwen},
{"type": "text", "text": "What does the image show?"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
# tokenize
return processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
)
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)
# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
assert len(batch) == 1
return {key: torch.tensor(value) for key, value in batch[0].items()}
# Recipe
recipe = [
GPTQModifier(
targets="Linear",
scheme="W8A8",
sequential_targets=["Qwen2_5_VLDecoderLayer"],
ignore=["lm_head", "re:visual.*"],
),
]
SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w8a8"
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=data_collator,
output_dir=SAVE_DIR
)
```
</details>
## Evaluation
The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
<details>
<summary>Evaluation Commands</summary>
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
python -m eval.run eval_vllm \
--model_name neuralmagic/pixtral-12b-quantized.w8a8 \
--url http://0.0.0.0:8000 \
--output_dir ~/tmp \
--eval_name <vision_task_name>
```
### Text-based Tasks
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
```
#### MGSM
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>44.56</td>
<td>45.67</td>
<td>102.49%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>75.94</td>
<td>75.55</td>
<td>99.49%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>92.53</td>
<td>92.32</td>
<td>99.77%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>81.20</td>
<td>78.80</td>
<td>97.04%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>54.15</td>
<td>53.85</td>
<td>99.45%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>69.28</b></td>
<td><b>69.24</b></td>
<td><b>99.94%</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>52.49</td>
<td>50.42</td>
<td>96.05%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>65.32</td>
<td>64.83</td>
<td>99.25%</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 1.33x speedup in single-stream deployment and up to 1.37x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>3.1</td>
<td>1454</td>
<td>1.8</td>
<td>2546</td>
<td>1.7</td>
<td>2610</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.27</td>
<td>2.6</td>
<td>1708</td>
<td>1.3</td>
<td>3340</td>
<td>1.3</td>
<td>3459</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.57</td>
<td>2.4</td>
<td>1886</td>
<td>1.0</td>
<td>4409</td>
<td>1.0</td>
<td>4409</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>2.2</td>
<td>920</td>
<td>1.3</td>
<td>1603</td>
<td>1.2</td>
<td>1636</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.09</td>
<td>2.1</td>
<td>975</td>
<td>1.2</td>
<td>1743</td>
<td>1.1</td>
<td>1814</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.20</td>
<td>2.0</td>
<td>1011</td>
<td>1.0</td>
<td>2015</td>
<td>1.0</td>
<td>2012</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>1.5</td>
<td>740</td>
<td>0.9</td>
<td>1221</td>
<td>0.9</td>
<td>1276</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<td>1.06</td>
<td>1.4</td>
<td>768</td>
<td>0.9</td>
<td>1276</td>
<td>0.8</td>
<td>1399</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.24</td>
<td>0.9</td>
<td>1219</td>
<td>0.9</td>
<td>1270</td>
<td>0.8</td>
<td>1304</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>0.5</td>
<td>2405</td>
<td>2.6</td>
<td>11889</td>
<td>2.9</td>
<td>12909</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.26</td>
<td>0.6</td>
<td>2725</td>
<td>3.4</td>
<td>15162</td>
<td>3.9</td>
<td>17673</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.39</td>
<td>0.6</td>
<td>2548</td>
<td>3.9</td>
<td>17437</td>
<td>4.7</td>
<td>21223</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>0.8</td>
<td>1663</td>
<td>3.9</td>
<td>7899</td>
<td>4.4</td>
<td>8924</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.06</td>
<td>0.9</td>
<td>1734</td>
<td>4.2</td>
<td>8488</td>
<td>4.7</td>
<td>9548</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.10</td>
<td>0.9</td>
<td>1775</td>
<td>4.2</td>
<td>8540</td>
<td>5.1</td>
<td>10318</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>1.1</td>
<td>1188</td>
<td>4.3</td>
<td>4656</td>
<td>4.3</td>
<td>4676</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<td>1.15</td>
<td>1.4</td>
<td>1570</td>
<td>4.3</td>
<td>4676</td>
<td>4.8</td>
<td>5220</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.96</td>
<td>4.2</td>
<td>4598</td>
<td>4.1</td>
<td>4505</td>
<td>4.4</td>
<td>4838</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025). |
Lily-Phillips-101-Challenge-Leaked-Video/Lily-Phillips-101-Challenge-Original-Video | Lily-Phillips-101-Challenge-Leaked-Video | 2025-02-26T03:31:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T03:31:22Z | <p><a href="https://t.co/f7ohVkpVkt">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a></p>
<p><a href="https://t.co/f7ohVkpVkt">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a></p> |
xiechengqi/aca6838d-fe01-4bc6-b22c-f8df2bf1ec2e | xiechengqi | 2025-02-26T03:30:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-02-26T03:24:24Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aca6838d-fe01-4bc6-b22c-f8df2bf1ec2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e4baa04ca4e4fdd2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e4baa04ca4e4fdd2_train_data.json
type:
field_instruction: import_statement
field_output: next_line
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: xiechengqi/aca6838d-fe01-4bc6-b22c-f8df2bf1ec2e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e4baa04ca4e4fdd2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e3d59494-2ee7-4cc1-b4d8-1ec645f12911
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e3d59494-2ee7-4cc1-b4d8-1ec645f12911
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aca6838d-fe01-4bc6-b22c-f8df2bf1ec2e
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1505 | 0.0004 | 1 | 4.6855 |
| 3.9583 | 0.0011 | 3 | 4.6465 |
| 3.7604 | 0.0022 | 6 | 4.0508 |
| 2.6592 | 0.0033 | 9 | 2.8313 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
eugeneseo/q-Taxi-v3 | eugeneseo | 2025-02-26T03:27:04Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-26T03:24:56Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="eugeneseo/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Mwnthai/bodoLegalBert | Mwnthai | 2025-02-26T03:26:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-26T03:23:55Z | ---
library_name: transformers
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bodoLegalBert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bodoLegalBert
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7024 | 1.0 | 625 | 0.7529 |
| 0.6584 | 2.0 | 1250 | 0.7338 |
| 0.7034 | 3.0 | 1875 | 0.7346 |
| 0.6562 | 3.9946 | 2496 | 0.7330 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.0.1+cu117
- Datasets 3.2.0
- Tokenizers 0.21.0
|
shibajustfor/bde62dd5-07e9-458a-9e24-c7fa16983a3a | shibajustfor | 2025-02-26T03:23:56Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"region:us"
] | null | 2025-02-26T03:23:38Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: defog/llama-3-sqlcoder-8b
model-index:
- name: shibajustfor/bde62dd5-07e9-458a-9e24-c7fa16983a3a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/bde62dd5-07e9-458a-9e24-c7fa16983a3a
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
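No usage snippet is included; a minimal loading sketch, assuming the repo holds a standard PEFT adapter for the listed base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "defog/llama-3-sqlcoder-8b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the trained adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, "shibajustfor/bde62dd5-07e9-458a-9e24-c7fa16983a3a")
```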
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
eugeneseo/q-FrozenLake-v1-4x4-noSlippery | eugeneseo | 2025-02-26T03:23:41Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-26T03:23:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym` on newer setups

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="eugeneseo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# This checkpoint was trained without slippery tiles, so match that at creation time
env = gym.make(model["env_id"], is_slippery=False)
```
|
unieai/aqua-mini-u1-2502-b1 | unieai | 2025-02-26T03:23:22Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T03:14:23Z | ---
license: apache-2.0
---
|
bowilleatyou/364318b1-0f85-4649-b9de-362472df8649 | bowilleatyou | 2025-02-26T03:23:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T00:36:48Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sophie-Rain-Leaks-Full-Videos/Sophie.Rain.Leaked.Video.Viral.Leaks.On.SocialMedia | Sophie-Rain-Leaks-Full-Videos | 2025-02-26T03:22:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T03:22:31Z | <p><a href="https://t.co/b3BmJ8UQpZ">🔴 ➤► Click Here to Watch Full Video</a></p>
<p><a href="https://t.co/b3BmJ8UQpZ">🔴 ➤► Click Here for Full Video Link</a></p> |
John6666/calico-cat-tower-v10vpred-sdxl | John6666 | 2025-02-26T03:22:39Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"rouwei",
"v-pred",
"merge",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:merge:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:calculater/copycat-noob",
"base_model:merge:calculater/copycat-noob",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-02-26T03:14:37Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- rouwei
- v-pred
- merge
- illustrious
base_model:
- OnomaAIResearch/Illustrious-xl-early-release-v0
- Laxhar/noobai-XL-Vpred-1.0
- calculater/copycat-noob
---
Original model is [here](https://civitai.com/models/1294336/calico-cat-tower?modelVersionId=1460745).
This model was created by [nuko_masshigura](https://civitai.com/user/nuko_masshigura).
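Since this is a v-prediction checkpoint (see the `v-pred` tag), the scheduler needs `prediction_type="v_prediction"`; a hedged diffusers sketch (the prompt is an illustrative placeholder, not from the card):
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/calico-cat-tower-v10vpred-sdxl", torch_dtype=torch.float16
).to("cuda")
# Reconfigure the scheduler for v-prediction sampling
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction"
)
image = pipe("1girl, cute, looking at viewer").images[0]
```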
|
u-10bei/llm-jp-3-13b-instruct2-grpo-0222_lora_step2000_ja2000 | u-10bei | 2025-02-26T03:20:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:llm-jp/llm-jp-3-13b-instruct2",
"base_model:finetune:llm-jp/llm-jp-3-13b-instruct2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T03:18:41Z | ---
base_model: llm-jp/llm-jp-3-13b-instruct2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** u-10bei
- **License:** apache-2.0
- **Finetuned from model :** llm-jp/llm-jp-3-13b-instruct2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
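A hedged inference sketch with plain 🤗 transformers — the repo name suggests a LoRA training stage, so if the repo turns out to hold adapter weights only, load it through PEFT on top of the listed base model instead:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "u-10bei/llm-jp-3-13b-instruct2-grpo-0222_lora_step2000_ja2000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```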
|
Jonjew/PaintandPrint | Jonjew | 2025-02-26T03:20:16Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-02-26T03:15:46Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
painting on old book pages, large text in white ink across the top of an
image that says "Paint & Print" and in smaller text in the bottom corner
"LoRA by Dark Infinity", image of a painted woman with crystal blue eyes and
black hair, the background is painted dark, but the print shows through the
woman's face, there is a rose in a decorative hair piece in her hair and a
lace choker on her neck, <lora:Paint-on-Pages_v20:1>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 2837112947'
output:
url: images/00069-2025-02-03-2837112947.png
- text: >-
painting on old book pages, a woman, paper texture,
<lora:Paint-on-Pages_v20:1>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 775220158'
output:
url: images/05051-2025-02-02-775220158.png
- text: >-
a painting on pages of music, A young woman perched on the edge of a rocky
cliff overlooking a misty valley, her long hair flowing in the wind as she
sketches the landscape in a journal, a sense of peace and inspiration, oil
painting style with soft details, wide-angle shot, from behind.,
<lora:Paint-on-Pages_v20:1>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1313961784'
output:
url: images/05188-2025-02-02-1313961784.png
- text: dipntnpgs paint on old maps, a woman, <lora:Paint-on-Pages_v20:1>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 3381064429'
output:
url: images/05426-2025-02-02-3381064429.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: paint on book pages, paint on sheet music
license: unknown
---
# Paint & Print
<Gallery />
## Model description
FROM https://civitai.com/models/1214773/paint-and-print?modelVersionId=1368363
Triggers paint on book pages, paint on sheet music
Strength 0.8 to 1.2
A Fusion of Ink, Paper, and Expression
Paint & Print was inspired by a unique art technique involving the use of print media such as old books, sheet music, newspapers, maps, and ephemera as a canvas. The artwork respects and incorporates the underlying print rather than fully covering it, allowing portions of the printed material to peek through, integrating it into the piece rather than merely serving as a background. This style isn't just about illustration; it's about creating an artistic fusion of found materials and modern expression, breathing new life into something old, and extending the existence and purpose of the ephemeral. This type of artwork has been pioneered by several artists in various mediums, including Loui Jover, Nadezhda Izmailova, Hussein Tomeh, and Darren Crowley. If you like what this LoRA can do, I highly encourage you to see and support the unique and wonderful art that these artists have created as the images generated from this LoRA fail to compare to the artists that inspired it.
Usage
To use the most recent version of the LoRA, use the following settings:
Trigger word: the style is triggered by prompting for "paint on book pages" or "paint on sheet music." You can try other mediums as well, but they don't work as consistently. I've gotten good pictures from old maps, vintage books, newspapers, and ad flyers.
Usage notes: Simple imagery works best. Anything with a complicated background isn't likely going to come out with the style applied (and kind of defeats the point).
Lora Strength: A strength between 0.8 and 1.2 is recommended. Higher strengths will be more likely to apply the style to different mediums.
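A hedged diffusers sketch for applying this LoRA on FLUX.1-dev — the exact safetensors filename is not given here, so `load_lora_weights` is pointed at the repo root (pass `weight_name=...` if the repo holds several files):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/PaintandPrint")
image = pipe(
    "paint on book pages, a woman, paper texture",
    guidance_scale=1.0,        # the showcase images use guidance 1
    num_inference_steps=20,    # and 20 steps
).images[0]
```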
## Trigger words
You should use `paint on book pages` to trigger the image generation.
You should use `paint on sheet music` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PaintandPrint/tree/main) them in the Files & versions tab.
|
tdrussell/wan-1.3b-grayscale-lora-test | tdrussell | 2025-02-26T03:16:51Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-02-26T03:14:08Z | ---
license: mit
---
For testing purposes. Turns the result black and white even without saying that in the prompt. Works best on image generation (single frame). |
Sophie-Rain-Spiderman-video-Oficial/Sophie.Rain.SpiderMan.Viral.Video.Original.Link | Sophie-Rain-Spiderman-video-Oficial | 2025-02-26T03:16:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T03:16:36Z | <p><a href="https://t.co/b3BmJ8UQpZ">🔴 ➤► Click Here to Watch Full Video</a></p>
<p><a href="https://t.co/b3BmJ8UQpZ">🔴 ➤► Click Here for Full Video Link</a></p> |