| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 to 2025-07-13) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 518 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 to 2025-07-13) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
sainoforce/modelv9 | sainoforce | 2025-03-31T12:54:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T10:46:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
litmudoc/QwQ-coder-32B-MLX-Q6 | litmudoc | 2025-03-31T12:51:48Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"mergekit",
"merge",
"text-generation",
"conversational",
"base_model:YOYO-AI/QwQ-coder-32B",
"base_model:quantized:YOYO-AI/QwQ-coder-32B",
"6-bit",
"region:us"
]
| text-generation | 2025-03-31T12:43:47Z | ---
base_model: YOYO-AI/QwQ-coder-32B
library_name: mlx
tags:
- mergekit
- merge
- mlx
pipeline_tag: text-generation
---
# litmudoc/QwQ-coder-32B-MLX-Q6
This model [litmudoc/QwQ-coder-32B-MLX-Q6](https://huggingface.co/litmudoc/QwQ-coder-32B-MLX-Q6) was
converted to MLX format from [YOYO-AI/QwQ-coder-32B](https://huggingface.co/YOYO-AI/QwQ-coder-32B)
using mlx-lm version **0.22.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("litmudoc/QwQ-coder-32B-MLX-Q6")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Jonjew/HaileyBlackThornTheFirstDescendant | Jonjew | 2025-03-31T12:51:17Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T12:50:57Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
cinematic still TFD-Hailey-BlackThorn, Hailey, futuristic cyberpunk woman,
sitting on a rock on the edge of a cliff overlooking a futuristic city view
at night, starry sky. cinematic lighting and deep shadows. She has straight,
shoulder-length black hair with a side part swaying in the wind and wears a
black leather bodysuit with a deep neckline that reveals ample cleavage. The
bodysuit is adorned with spiky metallic accents and intricate circuitry
patterns. Hailey's legs are covered in black fishnet stockings. She wears
black fingerless gloves and thigh-high boots with silver spikes and neon
blue highlights. She has black, futuristic shoulder pads with glowing blue
accents. She also wears eyecover over her eyes, adding to her rebellious
look. Her bold red lipstick enhances her fierce expression. Highly
detailed, sci-fi, cyberpunk aesthetic. <lora:TFD-Hailey-BlackThorn:0.7> .
emotional, harmonious, vignette, highly detailed, high budget, bokeh,
cinemascope, moody, epic, gorgeous, film grain, grainy
parameters:
negative_prompt: >-
anime, cartoon, graphic, text, painting, crayon, graphite, abstract,
glitch, deformed, mutated, ugly, disfigured
output:
url: images/00061-1136498592.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TFD-Hailey-BlackThorn
license: unknown
---
# Hailey - Black Thorn - The First Descendant
<Gallery />
## Model description
FROM https://civitai.com/models/1213964/hailey-black-thorn-the-first-descendant
Trigger TFD-Hailey-BlackThorn
Strength 1
A FLUX character LoRA for Hailey from the video game The First Descendant.
Also check out my other TFD Character LoRAs below!
Triggerword: TFD-Hailey-BlackThorn
Suggested Weight: 0.7 to 1.0
My preview images were generated with:
- flux1-dev-Q8_0.gguf + t5xxl_fp16 (ForgeUI)
- Euler, Simple or Normal scheduler
- 5:8 or 9:16 aspect ratio, 1.25x Hires. Fix (4x-UltraSharp upscaler)
- Distilled CFG Scale: 3.5
- Only this LoRA enabled
Add the following to your prompt to help you get the full outfit:
TFD-Hailey-BlackThorn. She has straight, shoulder-length black hair and teal ear clasps. She is wearing a futuristic, black bodysuit with metallic and teal glowing details and high neckline. The low-cut design of the bodysuit reveals her cleavage. She is wearing a black visor that covers her eyes, adding to her cyberpunk aesthetic. She has spiked shoulder pads and fur accents that look tough and edgy. She is wearing high-heeled boots with turquoise details and metal spikes. Her lower body is covered in black fishnet stockings covering her thighs.
## Trigger words
You should use `TFD-Hailey-BlackThorn` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/HaileyBlackThornTheFirstDescendant/tree/main) them in the Files & versions tab.
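For illustration, a prompt combining the trigger word with the LoRA weight tag can be assembled programmatically. This is a minimal sketch: the scene text is a placeholder, and the `<lora:...>` syntax assumes a ForgeUI/A1111-style frontend as used in the preview settings above.

```python
# Sketch: build a prompt using the trigger word and suggested LoRA weight.
trigger = "TFD-Hailey-BlackThorn"
weight = 0.7  # suggested range per the card: 0.7 to 1.0
scene = "sitting on a rock overlooking a futuristic city at night"  # placeholder

prompt = f"cinematic still {trigger}, {scene} <lora:{trigger}:{weight}>"
print(prompt)
```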
|
Delta-Vector/Archaeo-32B-EXL2 | Delta-Vector | 2025-03-31T12:51:14Z | 0 | 0 | transformers | [
"transformers",
"roleplay",
"creative-writing",
"merge",
"mergekit",
"text-generation",
"base_model:Delta-Vector/Hamanasu-Magnum-QwQ-32B",
"base_model:merge:Delta-Vector/Hamanasu-Magnum-QwQ-32B",
"base_model:Sao10K/32B-Qwen2.5-Kunou-v1",
"base_model:merge:Sao10K/32B-Qwen2.5-Kunou-v1",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-30T07:54:50Z | ---
tags:
- roleplay
- creative-writing
- merge
- mergekit
base_model:
- Delta-Vector/Hamanasu-Magnum-QwQ-32B
- Sao10K/32B-Qwen2.5-Kunou-v1
pipeline_tag: text-generation
library_name: transformers
---
```
__~a~_
~~; ~_
_ ~ ~_ _
'_\;__._._._._._._] ~_._._._._._.__;/_`
'(/'/'/'/'|'|'|'| ( )|'|'|'|'\'\'\'\)'
(/ / / /, | | | |(/ \) | | | ,\ \ \ \)
(/ / / / / | | | ~(/ \) ~ | | \ \ \ \ \)
(/ / / / / ~ ~ ~ (/ \) ~ ~ \ \ \ \ \)
(/ / / / ~ / (||)| ~ \ \ \ \)
~ / / ~ M /||\M ~ \ \ ~
~ ~ /||\ ~ ~
//||\\
//||\\
//||\\
'/||\' "Archaeopteryx"
```
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
body {
background: #0a0017;
margin: 0;
padding: 20px;
font-family: 'VT323', monospace;
color: #ff00aa;
text-shadow: 0 0 8px #ff00aa;
animation: glitch-flicker 0.2s infinite alternate;
}
@keyframes glitch-flicker {
0% { text-shadow: 0 0 5px #ff00aa, 0 0 15px #ff00aa; }
100% { text-shadow: 0 0 8px #ff0066, 0 0 18px #ff0066; }
}
.crt-container {
padding: 10px;
max-width: 900px;
margin: auto;
}
.crt-case {
background: linear-gradient(135deg, #130021, #20002c);
border-radius: 10px;
padding: 15px;
box-shadow:
inset 2px 2px 10px rgba(255,0,170,0.5),
2px 2px 5px rgba(255,0,170,0.3),
0 0 25px rgba(255,0,170,0.2);
}
.crt-screen {
background: #0c011a;
padding: 20px;
border-radius: 10px;
box-shadow:
inset 0 0 25px rgba(255,0,170,0.3),
0 0 15px rgba(255,0,170,0.7);
filter: contrast(1.2) brightness(1.2);
text-shadow: 0px 0px 5px #ff00aa;
animation: glow-pulse 3s infinite alternate;
}
@keyframes glow-pulse {
0% { box-shadow: inset 0 0 20px rgba(255,0,170,0.3), 0 0 15px rgba(255,0,170,0.3); }
100% { box-shadow: inset 0 0 30px rgba(255,0,170,0.5), 0 0 25px rgba(255,0,170,0.5); }
}
h2 {
color: #ff33cc;
text-align: center;
font-size: 28px;
text-shadow:
0 0 8px #ff33cc,
0 0 18px #ff0044;
}
pre {
background: rgba(255,0,170,0.1);
padding: 10px;
border-radius: 10px;
color: #ff66cc;
font-size: 14px;
box-shadow: inset 0 0 10px rgba(255,0,170,0.5);
}
.glitch {
animation: text-glitch 0.5s infinite alternate;
}
@keyframes text-glitch {
0% { transform: translateX(-2px); text-shadow: 0 0 5px #ff0066, 0 0 10px #ff33cc; }
100% { transform: translateX(2px); text-shadow: 0 0 8px #ff00aa, 0 0 20px #ff0099; }
}
.neon-link {
color: #ff66cc;
text-decoration: none;
transition: text-shadow 0.3s ease;
}
.neon-link:hover {
text-shadow: 0px 0px 15px #ff66cc, 0 0 25px rgba(255,0,170,0.5);
}
.ascii-art {
text-align: center;
font-size: 12px;
color: #ff33cc;
text-shadow: 0px 0px 5px #ff00ff;
margin-bottom: 20px;
}
.quantso-container {
display: flex;
justify-content: center;
gap: 20px;
margin-top: 20px;
}
.quantso-box {
background: rgba(255,0,170,0.1);
padding: 15px;
border-radius: 10px;
text-align: center;
box-shadow: inset 0 0 10px rgba(255,0,170,0.5);
flex: 1;
max-width: 150px;
}
</style>
</head>
<body>
<div class="crt-container">
<div class="crt-case">
<div class="crt-screen">
<p>These are EXL2 quants: look in the revisions for the individual quantizations; the main branch contains the measurement file.</p>
<p>A series of merges made for roleplaying and creative writing. This model merges 32B-Qwen2.5-Kunou-v1 and Hamanasu-Magnum-QwQ-32B using SLERP.</p>
<h3>ChatML formatting</h3>
<pre>
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
</pre>
<h3>MergeKit Configuration</h3>
<pre>
models:
- model: Sao10K/32B-Qwen2.5-Kunou-v1
- model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
merge_method: slerp
base_model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
parameters:
t:
- value: 0.2
dtype: bfloat16
tokenizer_source: base
</pre>
<h3>Quants:</h3>
<div class="quantso-container">
<div class="quantso-box">
<strong>GGUF</strong><br>
<a class="neon-link" href="#">https://huggingface.co/mradermacher/Archaeo-32B-GGUF/</a>
</div>
<div class="quantso-box">
<strong>EXL2</strong><br>
<a class="neon-link" href="#">https://huggingface.co/Delta-Vector/Archaeo-32B-EXL2/</a>
</div>
</div>
<h3>Credits</h3>
<p>Thank you to: Kubernetes-bad, LucyKnada, Intervitens, Samantha Twinkman, Tav, Trappu & The rest of Anthracite</p>
</div>
</div>
</div>
</body>
</html> |
Delta-Vector/Archaeo-32B | Delta-Vector | 2025-03-31T12:50:57Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"roleplay",
"creative-writing",
"merge",
"mergekit",
"conversational",
"base_model:Delta-Vector/Hamanasu-Magnum-QwQ-32B",
"base_model:merge:Delta-Vector/Hamanasu-Magnum-QwQ-32B",
"base_model:Sao10K/32B-Qwen2.5-Kunou-v1",
"base_model:merge:Sao10K/32B-Qwen2.5-Kunou-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-28T16:40:28Z | ---
tags:
- roleplay
- creative-writing
- merge
- mergekit
base_model:
- Delta-Vector/Hamanasu-Magnum-QwQ-32B
- Sao10K/32B-Qwen2.5-Kunou-v1
pipeline_tag: text-generation
library_name: transformers
---
```
__~a~_
~~; ~_
_ ~ ~_ _
'_\;__._._._._._._] ~_._._._._._.__;/_`
'(/'/'/'/'|'|'|'| ( )|'|'|'|'\'\'\'\)'
(/ / / /, | | | |(/ \) | | | ,\ \ \ \)
(/ / / / / | | | ~(/ \) ~ | | \ \ \ \ \)
(/ / / / / ~ ~ ~ (/ \) ~ ~ \ \ \ \ \)
(/ / / / ~ / (||)| ~ \ \ \ \)
~ / / ~ M /||\M ~ \ \ ~
~ ~ /||\ ~ ~
//||\\
//||\\
//||\\
'/||\' "Archaeopteryx"
```
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
body {
background: #0a0017;
margin: 0;
padding: 20px;
font-family: 'VT323', monospace;
color: #ff00aa;
text-shadow: 0 0 8px #ff00aa;
animation: glitch-flicker 0.2s infinite alternate;
}
@keyframes glitch-flicker {
0% { text-shadow: 0 0 5px #ff00aa, 0 0 15px #ff00aa; }
100% { text-shadow: 0 0 8px #ff0066, 0 0 18px #ff0066; }
}
.crt-container {
padding: 10px;
max-width: 900px;
margin: auto;
}
.crt-case {
background: linear-gradient(135deg, #130021, #20002c);
border-radius: 10px;
padding: 15px;
box-shadow:
inset 2px 2px 10px rgba(255,0,170,0.5),
2px 2px 5px rgba(255,0,170,0.3),
0 0 25px rgba(255,0,170,0.2);
}
.crt-screen {
background: #0c011a;
padding: 20px;
border-radius: 10px;
box-shadow:
inset 0 0 25px rgba(255,0,170,0.3),
0 0 15px rgba(255,0,170,0.7);
filter: contrast(1.2) brightness(1.2);
text-shadow: 0px 0px 5px #ff00aa;
animation: glow-pulse 3s infinite alternate;
}
@keyframes glow-pulse {
0% { box-shadow: inset 0 0 20px rgba(255,0,170,0.3), 0 0 15px rgba(255,0,170,0.3); }
100% { box-shadow: inset 0 0 30px rgba(255,0,170,0.5), 0 0 25px rgba(255,0,170,0.5); }
}
h2 {
color: #ff33cc;
text-align: center;
font-size: 28px;
text-shadow:
0 0 8px #ff33cc,
0 0 18px #ff0044;
}
pre {
background: rgba(255,0,170,0.1);
padding: 10px;
border-radius: 10px;
color: #ff66cc;
font-size: 14px;
box-shadow: inset 0 0 10px rgba(255,0,170,0.5);
}
.glitch {
animation: text-glitch 0.5s infinite alternate;
}
@keyframes text-glitch {
0% { transform: translateX(-2px); text-shadow: 0 0 5px #ff0066, 0 0 10px #ff33cc; }
100% { transform: translateX(2px); text-shadow: 0 0 8px #ff00aa, 0 0 20px #ff0099; }
}
.neon-link {
color: #ff66cc;
text-decoration: none;
transition: text-shadow 0.3s ease;
}
.neon-link:hover {
text-shadow: 0px 0px 15px #ff66cc, 0 0 25px rgba(255,0,170,0.5);
}
.ascii-art {
text-align: center;
font-size: 12px;
color: #ff33cc;
text-shadow: 0px 0px 5px #ff00ff;
margin-bottom: 20px;
}
.quantso-container {
display: flex;
justify-content: center;
gap: 20px;
margin-top: 20px;
}
.quantso-box {
background: rgba(255,0,170,0.1);
padding: 15px;
border-radius: 10px;
text-align: center;
box-shadow: inset 0 0 10px rgba(255,0,170,0.5);
flex: 1;
max-width: 150px;
}
</style>
</head>
<body>
<div class="crt-container">
<div class="crt-case">
<div class="crt-screen">
<p>A series of merges made for roleplaying and creative writing. This model merges 32B-Qwen2.5-Kunou-v1 and Hamanasu-Magnum-QwQ-32B using SLERP.</p>
<h3>ChatML formatting</h3>
<pre>
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
</pre>
<h3>MergeKit Configuration</h3>
<pre>
models:
- model: Sao10K/32B-Qwen2.5-Kunou-v1
- model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
merge_method: slerp
base_model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
parameters:
t:
- value: 0.2
dtype: bfloat16
tokenizer_source: base
</pre>
<h3>Quants:</h3>
<div class="quantso-container">
<div class="quantso-box">
<strong>GGUF</strong><br>
<a class="neon-link" href="#">https://huggingface.co/mradermacher/Archaeo-32B-GGUF/</a>
</div>
<div class="quantso-box">
<strong>EXL2</strong><br>
<a class="neon-link" href="#">https://huggingface.co/Delta-Vector/Archaeo-32B-EXL2/</a>
</div>
</div>
<h3>Credits</h3>
<p>Thank you to: Kubernetes-bad, LucyKnada, Intervitens, Samantha Twinkman, Tav, Trappu & The rest of Anthracite</p>
</div>
</div>
</div>
</body>
</html> |
juhw/q427 | juhw | 2025-03-31T12:50:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:47:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASethi04/llama-3.1-8b-arc-e-lora | ASethi04 | 2025-03-31T12:49:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
]
| null | 2025-03-31T06:01:11Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: peft
license: llama3.1
metrics:
- accuracy
- precision
- recall
- f1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama-3.1-8b-arc-e-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3.1-8b-arc-e-lora
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2417
- Accuracy: 0.9140
- Precision: 0.9143
- Recall: 0.9130
- F1: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
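As a sanity check on the derived values above, the effective batch size follows directly from the base settings, and the per-epoch step count in the results table implies the approximate size of the training split. Note the example count is an inference from the reported step counts, not something stated in this card.

```python
# Effective batch size from the listed hyperparameters.
train_batch_size = 1
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 2, matching total_train_batch_size above

# ~1125 optimizer steps per epoch at an effective batch of 2 implies roughly
# 2250 training examples (an inference, not stated in the card).
steps_per_epoch = 1125
approx_train_examples = steps_per_epoch * total_train_batch_size
print(approx_train_examples)  # 2250
```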
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1542 | 0.9996 | 1125 | 0.1994 | 0.9088 | 0.9084 | 0.9102 | 0.9087 |
| 0.063 | 2.0 | 2251 | 0.2417 | 0.9140 | 0.9143 | 0.9130 | 0.9133 |
| 0.0001 | 2.9996 | 3376 | 0.3695 | 0.9088 | 0.9076 | 0.9088 | 0.9079 |
| 0.0 | 3.9982 | 4500 | 0.4042 | 0.9070 | 0.9055 | 0.9072 | 0.9060 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.19.0
- Tokenizers 0.20.1 |
LiAutoAD/Ristretto-3B | LiAutoAD | 2025-03-31T12:49:32Z | 272 | 0 | transformers | [
"transformers",
"safetensors",
"ristretto",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"zh",
"dataset:lmms-lab/LLaVA-OneVision-Data",
"dataset:BAAI/Infinity-MM",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2025-03-26T08:26:51Z | ---
license: apache-2.0
datasets:
- lmms-lab/LLaVA-OneVision-Data
- BAAI/Infinity-MM
language:
- en
- zh
base_model:
- google/siglip2-so400m-patch14-384
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---
## Introduction
We are excited to introduce **Ristretto**, our newest vision-language model (VLM) and a significant step forward for the field. Ristretto deploys a dynamic number of image tokens, allowing the token count to be adjusted flexibly to task requirements, and its enhanced projector architecture supports these dynamic token configurations. The refined architecture and advanced training approach deliver improved performance and versatility over its predecessors.
**Key Innovations**
Coming soon...
### Environment Setup
```bash
pip install "torch>=2.3.0"
pip install transformers==4.37.0
```
### How to use?
```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
import requests
from io import BytesIO
IMAGENET_MEAN = (0.5, 0.5, 0.5)
IMAGENET_STD = (0.5, 0.5, 0.5)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=10, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_data, input_size=384, max_num=10):
image = Image.open(image_data).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
model_path = 'LiAutoAD/Ristretto-3B'
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
image_url = 'https://github.com/user-attachments/assets/83258e94-5d61-48ef-a87f-80dd9d895524'
response = requests.get(image_url)
image_data = BytesIO(response.content)
pixel_values = load_image(image_data, max_num=10).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# The recommended range for `num_image_token` is 64 to 576, and the value can be adjusted based on task requirements.
num_image_token = 256
# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question} Assistant: {response}')
# text-image conversation && multi-round conversation
question = '<image> Please describe the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question} Assistant: {response}')
question = 'What is the best title for the image?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question} Assistant: {response}')
```
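The crop-box arithmetic in `dynamic_preprocess` can be sanity-checked in isolation. The sketch below is illustrative only (the `tile_boxes` helper is not part of the Ristretto code); it reproduces the same box computation without PIL so the tiling pattern can be inspected directly:

```python
def tile_boxes(target_aspect_ratio, image_size):
    # Same arithmetic as the loop in dynamic_preprocess: walk the grid
    # row-major and emit (left, upper, right, lower) crop boxes.
    cols, rows = target_aspect_ratio
    target_width = image_size * cols
    boxes = []
    for i in range(cols * rows):
        boxes.append((
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size,
        ))
    return boxes

# A 2x1 grid at image_size=384 yields two side-by-side 384x384 crops.
print(tile_boxes((2, 1), 384))
```

With `use_thumbnail=True`, a downscaled full-image tile is appended to these crops whenever more than one block is produced.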
### Evaluation
| Benchmark | Qwen2.5-VL-3B | InternVL2.5-4B | Ristretto-3B |
| :-------: | :----------: | :-------------: | :----: |
| MMBench-TEST-avg | 76.8 | 78.2 | 80.1 |
| MMStar | 56.3 | 58.7 | 62.6 |
| MMMU-VAL | 51.2 | 51.8 | 49.1 |
| MathVista-MINI-test | 61.2 | 60.8 | 67.9 |
| HallucinationBench | 46.6 | 46.6 | 50.2 |
| AI2D | 81.4 | 81.4 | 84.3 |
| OCRBench | 82.8 | 82.0 | 84.0 |
| MMVet | 60.0 | 61.5 | 61.8 |
| Average | 64.5 | 65.1 | 67.6 |
We use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate Ristretto-3B. The other results are taken from the [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-multimodal).
## License Agreement
All of our open-source models are licensed under the Apache-2.0 license.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
Efficient-Large-Model/Sana_Sprint_1.6B_1024px_teacher | Efficient-Large-Model | 2025-03-31T12:49:25Z | 16 | 0 | sana, sana-sprint | [
"sana, sana-sprint",
"text-to-image",
"SANA-Sprint",
"1024px_based_image_size",
"BF16",
"One-step diffusion",
"en",
"zh",
"arxiv:2503.09641",
"base_model:Efficient-Large-Model/Sana_Sprint_1.6B_1024px_teacher",
"base_model:finetune:Efficient-Large-Model/Sana_Sprint_1.6B_1024px_teacher",
"region:us"
]
| text-to-image | 2025-03-19T07:30:43Z | ---
library_name: sana, sana-sprint
tags:
- text-to-image
- SANA-Sprint
- 1024px_based_image_size
- BF16
- One-step diffusion
language:
- en
- zh
base_model:
- Efficient-Large-Model/Sana_Sprint_1.6B_1024px_teacher
pipeline_tag: text-to-image
---
<p align="center" style="border-radius: 10px">
<img src="https://nvlabs.github.io/Sana/Sprint/asset/SANA-Sprint.png" width="50%" alt="logo"/>
</p>
<div style="display:flex;justify-content: center">
<a href="https://huggingface.co/collections/Efficient-Large-Model/sana-sprint-67d6810d65235085b3b17c76"><img src="https://img.shields.io/static/v1?label=Weights&message=Huggingface&color=yellow"></a>  
<a href="https://github.com/NVlabs/Sana"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>  
<a href="https://nvlabs.github.io/Sana/Sprint/"><img src="https://img.shields.io/static/v1?label=Project&message=Github&color=blue&logo=github-pages"></a>  
<!-- <a href="https://hanlab.mit.edu/projects/sana/"><img src="https://img.shields.io/static/v1?label=Page&message=MIT&color=darkred&logo=github-pages"></a>   -->
<a href="https://arxiv.org/pdf/2503.09641"><img src="https://img.shields.io/static/v1?label=Arxiv&message=SANA-Sprint&color=red&logo=arxiv"></a>  
<a href="https://nv-sana.mit.edu/sprint"><img src="https://img.shields.io/static/v1?label=Demo&message=MIT&color=yellow"></a>  
<a href="https://discord.gg/rde6eaE5Ta"><img src="https://img.shields.io/static/v1?label=Discuss&message=Discord&color=purple&logo=discord"></a>  
</div>
# 🐱 Sana Model Card
This model serves as the **Teacher** in the figure below. It's not a few-step generative model but a fine-tuned diffusion model with
(1) **Dense Timestep Embedding** and (2) **QK Normalization** discussed in the [SANA-Sprint paper](https://arxiv.org/pdf/2503.09641).
Few-step generative models can be found in [HF repo](https://huggingface.co/collections/Efficient-Large-Model/sana-sprint-67d6810d65235085b3b17c76).
Source code is available at https://github.com/NVlabs/Sana.
## Training Pipeline
<p align="center" style="border-radius: 10px">
<img src="https://nvlabs.github.io/Sana/Sprint/asset/content/paradigm.png" width="85%" alt="teaser_page1"/>
</p>
### Model Description
- **Developed by:** NVIDIA, Sana
- **Model type:** Teacher model for One-Step Diffusion with Continuous-Time Consistency Distillation
- **Model size:** 1.6B parameters
- **Model precision:** torch.bfloat16 (BF16)
- **Model resolution:** This model is developed to generate 1024px-based images with multi-scale height and width.
- **License:** [NSCL v2-custom](./LICENSE.txt). Governing Terms: NVIDIA License. Additional Information: [Gemma Terms of Use | Google AI for Developers](https://ai.google.dev/gemma/terms) for Gemma-2-2B-IT, [Gemma Prohibited Use Policy | Google AI for Developers](https://ai.google.dev/gemma/prohibited_use_policy).
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
It is a Linear Diffusion Transformer that uses one fixed, pretrained text encoder ([Gemma2-2B-IT](https://huggingface.co/google/gemma-2-2b-it))
and one 32x spatial-compressed latent feature encoder ([DC-AE](https://hanlab.mit.edu/projects/dc-ae)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/NVlabs/Sana) and the [SANA-Sprint report on arXiv](https://arxiv.org/pdf/2503.09641).
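As a rough illustration of the 32x spatial compression mentioned above (a sketch, not code from the Sana repository), the latent grid size for a given input resolution is just the pixel dimensions divided by the compression ratio:

```python
def latent_grid(height, width, compression_ratio=32):
    # DC-AE compresses each spatial dimension by 32x, so a
    # 1024x1024 input image corresponds to a 32x32 latent grid.
    return height // compression_ratio, width // compression_ratio

print(latent_grid(1024, 1024))  # (32, 32)
```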
### Model Sources
For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/NVlabs/Sana), which is more suitable for both training and inference.
[MIT Han-Lab](https://nv-sana.mit.edu/sprint) provides free SANA-Sprint inference.
- **Repository:** https://github.com/NVlabs/Sana
- **Demo:** https://nv-sana.mit.edu/sprint
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism.
- The model cannot render complex legible text.
- Fingers, etc. in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. |
Efficient-Large-Model/Sana_Sprint_0.6B_1024px_teacher | Efficient-Large-Model | 2025-03-31T12:49:13Z | 0 | 0 | sana, sana-sprint | [
"sana, sana-sprint",
"text-to-image",
"SANA-Sprint",
"1024px_based_image_size",
"BF16",
"One-step diffusion",
"en",
"zh",
"arxiv:2503.09641",
"base_model:Efficient-Large-Model/Sana_Sprint_0.6B_1024px_teacher",
"base_model:finetune:Efficient-Large-Model/Sana_Sprint_0.6B_1024px_teacher",
"region:us"
]
| text-to-image | 2025-03-31T12:40:28Z | ---
library_name: sana, sana-sprint
tags:
- text-to-image
- SANA-Sprint
- 1024px_based_image_size
- BF16
- One-step diffusion
language:
- en
- zh
base_model:
- Efficient-Large-Model/Sana_Sprint_0.6B_1024px_teacher
pipeline_tag: text-to-image
---
<p align="center" style="border-radius: 10px">
<img src="https://nvlabs.github.io/Sana/Sprint/asset/SANA-Sprint.png" width="50%" alt="logo"/>
</p>
<div style="display:flex;justify-content: center">
<a href="https://huggingface.co/collections/Efficient-Large-Model/sana-sprint-67d6810d65235085b3b17c76"><img src="https://img.shields.io/static/v1?label=Weights&message=Huggingface&color=yellow"></a>  
<a href="https://github.com/NVlabs/Sana"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>  
<a href="https://nvlabs.github.io/Sana/Sprint/"><img src="https://img.shields.io/static/v1?label=Project&message=Github&color=blue&logo=github-pages"></a>  
<!-- <a href="https://hanlab.mit.edu/projects/sana/"><img src="https://img.shields.io/static/v1?label=Page&message=MIT&color=darkred&logo=github-pages"></a>   -->
<a href="https://arxiv.org/pdf/2503.09641"><img src="https://img.shields.io/static/v1?label=Arxiv&message=SANA-Sprint&color=red&logo=arxiv"></a>  
<a href="https://nv-sana.mit.edu/sprint"><img src="https://img.shields.io/static/v1?label=Demo&message=MIT&color=yellow"></a>  
<a href="https://discord.gg/rde6eaE5Ta"><img src="https://img.shields.io/static/v1?label=Discuss&message=Discord&color=purple&logo=discord"></a>  
</div>
# 🐱 Sana Model Card
This model serves as the **Teacher** in the figure below. It's not a few-step generative model but a fine-tuned diffusion model with
(1) **Dense Timestep Embedding** and (2) **QK Normalization** discussed in the [SANA-Sprint paper](https://arxiv.org/pdf/2503.09641).
Few-step generative models can be found in [HF repo](https://huggingface.co/collections/Efficient-Large-Model/sana-sprint-67d6810d65235085b3b17c76).
Source code is available at https://github.com/NVlabs/Sana.
## Training Pipeline
<p align="center" style="border-radius: 10px">
<img src="https://nvlabs.github.io/Sana/Sprint/asset/content/paradigm.png" width="85%" alt="teaser_page1"/>
</p>
### Model Description
- **Developed by:** NVIDIA, Sana
- **Model type:** Teacher model for One-Step Diffusion with Continuous-Time Consistency Distillation
- **Model size:** 0.6B parameters
- **Model precision:** torch.bfloat16 (BF16)
- **Model resolution:** This model is developed to generate 1024px-based images with multi-scale height and width.
- **License:** [NSCL v2-custom](./LICENSE.txt). Governing Terms: NVIDIA License. Additional Information: [Gemma Terms of Use | Google AI for Developers](https://ai.google.dev/gemma/terms) for Gemma-2-2B-IT, [Gemma Prohibited Use Policy | Google AI for Developers](https://ai.google.dev/gemma/prohibited_use_policy).
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
It is a Linear Diffusion Transformer that uses one fixed, pretrained text encoder ([Gemma2-2B-IT](https://huggingface.co/google/gemma-2-2b-it))
and one 32x spatial-compressed latent feature encoder ([DC-AE](https://hanlab.mit.edu/projects/dc-ae)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/NVlabs/Sana) and the [SANA-Sprint report on arXiv](https://arxiv.org/pdf/2503.09641).
### Model Sources
For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/NVlabs/Sana), which is more suitable for both training and inference.
[MIT Han-Lab](https://nv-sana.mit.edu/sprint) provides free SANA-Sprint inference.
- **Repository:** https://github.com/NVlabs/Sana
- **Demo:** https://nv-sana.mit.edu/sprint
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism.
- The model cannot render complex legible text.
- Fingers, etc. in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. |
tomoe007/non | tomoe007 | 2025-03-31T12:48:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:42:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jonjew/ViessaUltimateAbsoluteZeroTheFirstDescendant | Jonjew | 2025-03-31T12:48:23Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T12:48:10Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
semi side-view cinematic film still of TFD-Viessa-ULT-AbsoluteZero, a
futuristic, armored female warrior sorceress is posing elegantly in a snowy
winter scene with lots of snowfall. She is casting blue magic with her hands
and magic is swirling all around her. The background has a snowy mountain
view. She is casting magic.
output:
url: images/00127-2174865257.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TFD-Viessa-ULT-AbsoluteZero
license: unknown
---
# Viessa Ultimate - Absolute Zero - The First Descendant
<Gallery />
## Model description
FROM https://civitai.com/models/769178/viessa-ultimate-absolute-zero-the-first-descendant-flux-lora
Trigger TFD-Viessa-ULT-AbsoluteZero
Strength 1
A (FLUX) Character LoRA for Viessa Ultimate (with the Absolute Zero -skin)
from The First Descendant -videogame. Also check out my Hailey and Sharen FLUX LoRAs.
Triggerword: TFD-Viessa-ULT-AbsoluteZero
Suggested Weight: 1
My Preview Images Generated on:
-flux1-dev-Q8_0.gguf + t5xxl_fp16 (ForgeUI)
-Euler, Simple
-960x1728 (or 1024x1600)+ 1.25x Hires. Fix (4x-UltraSharp -upscaler)
-Distilled CFG Scale: 3.5
-Only This LoRA enabled
Add the following to your prompt to help you get the character:
TFD-Viessa-ULT-AbsoluteZero, a futuristic, armored female warrior sorceress
The character has a sleek, athletic build with a toned physique. She wears a mask with a sleek, futuristic design that covers most of her face, leaving only her mouth and nose visible. The mask is predominantly white with gold and blue accents. She has long, wavy silver hair styled into two braids that cascade over her shoulders.
Her armor features intricate, geometric patterns and a high-tech, glossy texture. The armor covers her entire body with additional decorative pieces on her shoulders. The character's attire is form-fitting, emphasizing her curvaceous physique. She also has a waist cape.
## Trigger words
You should use `TFD-Viessa-ULT-AbsoluteZero` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/ViessaUltimateAbsoluteZeroTheFirstDescendant/tree/main) them in the Files & versions tab.
|
silviasapora/gemma-7b-silvia_cpo-basic_capibara-5e-5-025-v150 | silviasapora | 2025-03-31T12:47:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T11:13:14Z | ---
base_model: google/gemma-7b
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia_cpo-basic_capibara-5e-5-025-v150", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/wyrt8myx)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rounakiitkgp/safety-gen-ai-gemma-3-1b-tilde | rounakiitkgp | 2025-03-31T12:46:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:45:23Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Binay123456/Kolkata_Housing_Price_Prediction | Binay123456 | 2025-03-31T12:46:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-03-31T12:46:19Z | ---
license: apache-2.0
---
|
lesso12/5f00c51c-8e96-4eff-a198-6f8967cec42b | lesso12 | 2025-03-31T12:42:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
]
| null | 2025-03-31T11:32:49Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5f00c51c-8e96-4eff-a198-6f8967cec42b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5a00b31a0cadf31e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a00b31a0cadf31e_train_data.json
type:
field_input: system_prompt
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso12/5f00c51c-8e96-4eff-a198-6f8967cec42b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000212
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/5a00b31a0cadf31e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 120
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f813fc57-085f-4cb4-9c17-be1bd72df1d1
wandb_project: 12a
wandb_run: your_name
wandb_runid: f813fc57-085f-4cb4-9c17-be1bd72df1d1
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5f00c51c-8e96-4eff-a198-6f8967cec42b
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 120
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
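A rough sketch of how these settings shape the learning-rate curve — linear warmup for the first 100 steps, then cosine decay to zero over the remaining 400 (mirroring the behavior of Hugging Face's `get_cosine_schedule_with_warmup`; the trainer's exact implementation may differ slightly):

```python
import math

PEAK_LR = 2.12e-4  # learning_rate from the config above
WARMUP = 100       # lr_scheduler_warmup_steps
TOTAL = 500        # training_steps

def lr_at(step: int) -> float:
    """Learning rate at a given optimizer step under linear warmup + cosine decay."""
    if step < WARMUP:
        return PEAK_LR * step / WARMUP  # linear warmup from 0 to the peak
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to 0

print(lr_at(50), lr_at(100), lr_at(500))
```

Under this sketch the rate peaks at step 100 and has decayed essentially to zero by step 500, which is where training stops (about 0.34 epochs, per the results table).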
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 0.8650 |
| 3.712 | 0.3382 | 500 | 0.4632 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bowilleatyou/32f3bd01-82d3-4969-a5bb-b189aedc7611 | bowilleatyou | 2025-03-31T12:42:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:43:11Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hhhhhhh1014/lora_model | hhhhhhh1014 | 2025-03-31T12:41:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-17T07:37:29Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hhhhhhh1014
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
# maoniangmoxing
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
davidcheon/Qwen2.5-VL-3B-Instruct-Q2_K-GGUF | davidcheon | 2025-03-31T12:41:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| image-text-to-text | 2025-03-31T12:41:29Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
language:
- en
library_name: transformers
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
tags:
- multimodal
- llama-cpp
- gguf-my-repo
---
# davidcheon/Qwen2.5-VL-3B-Instruct-Q2_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-VL-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo davidcheon/Qwen2.5-VL-3B-Instruct-Q2_K-GGUF --hf-file qwen2.5-vl-3b-instruct-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo davidcheon/Qwen2.5-VL-3B-Instruct-Q2_K-GGUF --hf-file qwen2.5-vl-3b-instruct-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo davidcheon/Qwen2.5-VL-3B-Instruct-Q2_K-GGUF --hf-file qwen2.5-vl-3b-instruct-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo davidcheon/Qwen2.5-VL-3B-Instruct-Q2_K-GGUF --hf-file qwen2.5-vl-3b-instruct-q2_k.gguf -c 2048
```
|
RJTPP/stage2-deepseek1.5b-3k | RJTPP | 2025-03-31T12:41:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-1.5B-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Qwen-1.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:20:11Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-1.5B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RJTPP
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Qwen-1.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
juhw/q426 | juhw | 2025-03-31T12:40:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:38:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sxsun1684/lora-llama2-backward | sxsun1684 | 2025-03-31T12:40:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:40:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ayushexel/reranker-MiniLM-L6-H384-uncased-gooaq-1-epoch-1995000 | ayushexel | 2025-03-31T12:39:22Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"generated_from_trainer",
"dataset_size:11456702",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"arxiv:1908.10084",
"base_model:nreimers/MiniLM-L6-H384-uncased",
"base_model:finetune:nreimers/MiniLM-L6-H384-uncased",
"license:apache-2.0",
"model-index",
"region:us"
]
| text-ranking | 2025-03-31T12:39:16Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:11456702
- loss:BinaryCrossEntropyLoss
base_model: nreimers/MiniLM-L6-H384-uncased
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on nreimers/MiniLM-L6-H384-uncased
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: gooaq dev
type: gooaq-dev
metrics:
- type: map
value: 0.4404
name: Map
- type: mrr@10
value: 0.439
name: Mrr@10
- type: ndcg@10
value: 0.4867
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoMSMARCO R100
type: NanoMSMARCO_R100
metrics:
- type: map
value: 0.3958
name: Map
- type: mrr@10
value: 0.3805
name: Mrr@10
- type: ndcg@10
value: 0.4669
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNFCorpus R100
type: NanoNFCorpus_R100
metrics:
- type: map
value: 0.3521
name: Map
- type: mrr@10
value: 0.5816
name: Mrr@10
- type: ndcg@10
value: 0.376
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNQ R100
type: NanoNQ_R100
metrics:
- type: map
value: 0.3832
name: Map
- type: mrr@10
value: 0.382
name: Mrr@10
- type: ndcg@10
value: 0.436
name: Ndcg@10
- task:
type: cross-encoder-nano-beir
name: Cross Encoder Nano BEIR
dataset:
name: NanoBEIR R100 mean
type: NanoBEIR_R100_mean
metrics:
- type: map
value: 0.3771
name: Map
- type: mrr@10
value: 0.448
name: Mrr@10
- type: ndcg@10
value: 0.4263
name: Ndcg@10
---
# CrossEncoder based on nreimers/MiniLM-L6-H384-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) <!-- at revision 3276f0fac9d818781d7a1327b3ff818fc4e643c0 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("ayushexel/reranker-MiniLM-L6-H384-uncased-gooaq-1-epoch-1995000")
# Get scores for pairs of texts
pairs = [
['how much does it cost to get married in paris?', 'While the average cost of a wedding in the United States is 26,000 USD excluding the honeymoon, elopements in Paris might cost as low as 10,000 to 20,000 USD including the ceremony, reception, hotel, meals and a honeymoon.'],
['how much does it cost to get married in paris?', 'According to an internal study, the average destination wedding cost is $32,000 (excluding the cost of the engagement ring). Including the price of the engagement ring, the average destination wedding cost is $38,500.'],
['how much does it cost to get married in paris?', 'In France only civil marriages are legally binding, thus lot of couples have two ceremonies. The civil ceremony including notary fees costs you 350-400 euros on average in France. The religious ceremony costs around 200-300 euros.'],
['how much does it cost to get married in paris?', "The average cost of a wedding in 2019 was $33,900 (including the engagement ring, ceremony and reception), according to The Knot's 2019 Real Weddings Study. Here's what you should know about wedding costs and how to realistically estimate how much you'll spend to take the plunge."],
['how much does it cost to get married in paris?', 'You can typically rent wedding dresses for as little as about $50-$600, but they can also cost much less or more depending on the dress and rental company. On the more expensive end, designer gowns rent for a fraction of their purchase price, anywhere from $500 to $2,000 is common.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'how much does it cost to get married in paris?',
[
'While the average cost of a wedding in the United States is 26,000 USD excluding the honeymoon, elopements in Paris might cost as low as 10,000 to 20,000 USD including the ceremony, reception, hotel, meals and a honeymoon.',
'According to an internal study, the average destination wedding cost is $32,000 (excluding the cost of the engagement ring). Including the price of the engagement ring, the average destination wedding cost is $38,500.',
'In France only civil marriages are legally binding, thus lot of couples have two ceremonies. The civil ceremony including notary fees costs you 350-400 euros on average in France. The religious ceremony costs around 200-300 euros.',
"The average cost of a wedding in 2019 was $33,900 (including the engagement ring, ceremony and reception), according to The Knot's 2019 Real Weddings Study. Here's what you should know about wedding costs and how to realistically estimate how much you'll spend to take the plunge.",
'You can typically rent wedding dresses for as little as about $50-$600, but they can also cost much less or more depending on the dress and rental company. On the more expensive end, designer gowns rent for a fraction of their purchase price, anywhere from $500 to $2,000 is common.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Dataset: `gooaq-dev`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": false
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.4404 (+0.1707) |
| mrr@10 | 0.4390 (+0.1801) |
| **ndcg@10** | **0.4867 (+0.1771)** |
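The map, mrr@10, and ndcg@10 columns follow their standard information-retrieval definitions. As a point of reference, a minimal NDCG@10 sketch (not the evaluator's actual implementation) looks like:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: rel_i / log2(i + 2) for 0-based rank i."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the predicted ordering over DCG of the ideal ordering."""
    ideal = sorted(relevances, reverse=True)
    if dcg(ideal[:k]) == 0:
        return 0.0
    return dcg(relevances[:k]) / dcg(ideal[:k])

# A single relevant document ranked first scores 1.0; ranked second, ~0.63.
print(ndcg_at_k([1, 0, 0]), ndcg_at_k([0, 1, 0]))
```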
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.3958 (-0.0937) | 0.3521 (+0.0911) | 0.3832 (-0.0364) |
| mrr@10 | 0.3805 (-0.0970) | 0.5816 (+0.0817) | 0.3820 (-0.0447) |
| **ndcg@10** | **0.4669 (-0.0735)** | **0.3760 (+0.0510)** | **0.4360 (-0.0646)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.3771 (-0.0130) |
| mrr@10 | 0.4480 (-0.0200) |
| **ndcg@10** | **0.4263 (-0.0291)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 11,456,702 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | label |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 18 characters</li><li>mean: 42.92 characters</li><li>max: 88 characters</li></ul> | <ul><li>min: 55 characters</li><li>mean: 252.31 characters</li><li>max: 383 characters</li></ul> | <ul><li>0: ~82.50%</li><li>1: ~17.50%</li></ul> |
* Samples:
| question | answer | label |
|:------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>how much does it cost to get married in paris?</code> | <code>While the average cost of a wedding in the United States is 26,000 USD excluding the honeymoon, elopements in Paris might cost as low as 10,000 to 20,000 USD including the ceremony, reception, hotel, meals and a honeymoon.</code> | <code>1</code> |
| <code>how much does it cost to get married in paris?</code> | <code>According to an internal study, the average destination wedding cost is $32,000 (excluding the cost of the engagement ring). Including the price of the engagement ring, the average destination wedding cost is $38,500.</code> | <code>0</code> |
| <code>how much does it cost to get married in paris?</code> | <code>In France only civil marriages are legally binding, thus lot of couples have two ceremonies. The civil ceremony including notary fees costs you 350-400 euros on average in France. The religious ceremony costs around 200-300 euros.</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": 5
}
```
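The `pos_weight` of 5 roughly counterbalances the ~82.50% / ~17.50% negative-to-positive label ratio reported above: each positive pair's loss term is scaled up so the minority class is not drowned out. A minimal pure-Python sketch of the weighted binary cross-entropy on logits (mirroring the behavior of `torch.nn.BCEWithLogitsLoss(pos_weight=...)`, which this loss wraps):

```python
import math

def weighted_bce_with_logits(logit: float, label: int, pos_weight: float = 5.0) -> float:
    """Binary cross-entropy on a raw logit, scaling the positive term by pos_weight."""
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return -(pos_weight * label * math.log(p) + (1 - label) * math.log(1.0 - p))

# A positive pair scored at logit 0.0 incurs 5x the loss of a negative pair
# scored at the same logit, pushing the model to rank true answers higher.
pos_loss = weighted_bce_with_logits(0.0, 1)  # 5 * ln(2)
neg_loss = weighted_bce_with_logits(0.0, 0)  # ln(2)
```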
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 12
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 12
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | gooaq-dev_ndcg@10 | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:------:|:-----:|:-------------:|:-----------------:|:------------------------:|:-------------------------:|:-------------------:|:--------------------------:|
| -1 | -1 | - | 0.0959 (-0.2136) | 0.0324 (-0.5081) | 0.2360 (-0.0890) | 0.0375 (-0.4632) | 0.1019 (-0.3534) |
| 0.0000 | 1 | 1.1913 | - | - | - | - | - |
| 0.0045 | 200 | 1.1796 | - | - | - | - | - |
| 0.0089 | 400 | 1.1778 | - | - | - | - | - |
| 0.0134 | 600 | 1.1696 | - | - | - | - | - |
| 0.0179 | 800 | 1.1659 | - | - | - | - | - |
| 0.0223 | 1000 | 1.1732 | - | - | - | - | - |
| 0.0268 | 1200 | 1.1115 | - | - | - | - | - |
| 0.0313 | 1400 | 1.0091 | - | - | - | - | - |
| 0.0358 | 1600 | 0.9436 | - | - | - | - | - |
| 0.0402 | 1800 | 0.9239 | - | - | - | - | - |
| 0.0447 | 2000 | 0.8863 | - | - | - | - | - |
| 0.0492 | 2200 | 0.8617 | - | - | - | - | - |
| 0.0536 | 2400 | 0.8448 | - | - | - | - | - |
| 0.0581 | 2600 | 0.8301 | - | - | - | - | - |
| 0.0626 | 2800 | 0.821 | - | - | - | - | - |
| 0.0670 | 3000 | 0.8049 | - | - | - | - | - |
| 0.0715 | 3200 | 0.7858 | - | - | - | - | - |
| 0.0760 | 3400 | 0.7732 | - | - | - | - | - |
| 0.0804 | 3600 | 0.7654 | - | - | - | - | - |
| 0.0849 | 3800 | 0.7495 | - | - | - | - | - |
| 0.0894 | 4000 | 0.7362 | - | - | - | - | - |
| 0.0938 | 4200 | 0.7264 | - | - | - | - | - |
| 0.0983 | 4400 | 0.7183 | - | - | - | - | - |
| 0.1028 | 4600 | 0.712 | - | - | - | - | - |
| 0.1073 | 4800 | 0.7048 | - | - | - | - | - |
| 0.1117 | 5000 | 0.7068 | - | - | - | - | - |
| 0.1162 | 5200 | 0.7083 | - | - | - | - | - |
| 0.1207 | 5400 | 0.6894 | - | - | - | - | - |
| 0.1251 | 5600 | 0.6852 | - | - | - | - | - |
| 0.1296 | 5800 | 0.6717 | - | - | - | - | - |
| 0.1341 | 6000 | 0.6814 | - | - | - | - | - |
| 0.1385 | 6200 | 0.6713 | - | - | - | - | - |
| 0.1430 | 6400 | 0.6637 | - | - | - | - | - |
| 0.1475 | 6600 | 0.6604 | - | - | - | - | - |
| 0.1519 | 6800 | 0.6576 | - | - | - | - | - |
| 0.1564 | 7000 | 0.6565 | - | - | - | - | - |
| 0.1609 | 7200 | 0.6535 | - | - | - | - | - |
| 0.1654 | 7400 | 0.6477 | - | - | - | - | - |
| 0.1698 | 7600 | 0.6467 | - | - | - | - | - |
| 0.1743 | 7800 | 0.6329 | - | - | - | - | - |
| 0.1788 | 8000 | 0.6372 | - | - | - | - | - |
| 0.1832 | 8200 | 0.6422 | - | - | - | - | - |
| 0.1877 | 8400 | 0.635 | - | - | - | - | - |
| 0.1922 | 8600 | 0.6344 | - | - | - | - | - |
| 0.1966 | 8800 | 0.6299 | - | - | - | - | - |
| 0.2011 | 9000 | 0.6293 | - | - | - | - | - |
| 0.2056 | 9200 | 0.6257 | - | - | - | - | - |
| 0.2100 | 9400 | 0.612 | - | - | - | - | - |
| 0.2145 | 9600 | 0.6215 | - | - | - | - | - |
| 0.2190 | 9800 | 0.6195 | - | - | - | - | - |
| 0.2234 | 10000 | 0.6133 | - | - | - | - | - |
| 0.2279 | 10200 | 0.6069 | - | - | - | - | - |
| 0.2324 | 10400 | 0.6013 | - | - | - | - | - |
| 0.2369 | 10600 | 0.6141 | - | - | - | - | - |
| 0.2413 | 10800 | 0.5997 | - | - | - | - | - |
| 0.2458 | 11000 | 0.6027 | - | - | - | - | - |
| 0.2503 | 11200 | 0.5993 | - | - | - | - | - |
| 0.2547 | 11400 | 0.5954 | - | - | - | - | - |
| 0.2592 | 11600 | 0.5948 | - | - | - | - | - |
| 0.2637 | 11800 | 0.5933 | - | - | - | - | - |
| 0.2681 | 12000 | 0.5893 | - | - | - | - | - |
| 0.2726 | 12200 | 0.5864 | - | - | - | - | - |
| 0.2771 | 12400 | 0.5884 | - | - | - | - | - |
| 0.2815 | 12600 | 0.5866 | - | - | - | - | - |
| 0.2860 | 12800 | 0.5898 | - | - | - | - | - |
| 0.2905 | 13000 | 0.5843 | - | - | - | - | - |
| 0.2950 | 13200 | 0.5926 | - | - | - | - | - |
| 0.2994 | 13400 | 0.5913 | - | - | - | - | - |
| 0.3039 | 13600 | 0.5768 | - | - | - | - | - |
| 0.3084 | 13800 | 0.5781 | - | - | - | - | - |
| 0.3128 | 14000 | 0.5805 | - | - | - | - | - |
| 0.3173 | 14200 | 0.5835 | - | - | - | - | - |
| 0.3218 | 14400 | 0.5736 | - | - | - | - | - |
| 0.3262 | 14600 | 0.5789 | - | - | - | - | - |
| 0.3307 | 14800 | 0.5789 | - | - | - | - | - |
| 0.3352 | 15000 | 0.5772 | - | - | - | - | - |
| 0.3396 | 15200 | 0.5668 | - | - | - | - | - |
| 0.3441 | 15400 | 0.5751 | - | - | - | - | - |
| 0.3486 | 15600 | 0.5643 | - | - | - | - | - |
| 0.3530 | 15800 | 0.575 | - | - | - | - | - |
| 0.3575 | 16000 | 0.5675 | - | - | - | - | - |
| 0.3620 | 16200 | 0.567 | - | - | - | - | - |
| 0.3665 | 16400 | 0.5583 | - | - | - | - | - |
| 0.3709 | 16600 | 0.562 | - | - | - | - | - |
| 0.3754 | 16800 | 0.5661 | - | - | - | - | - |
| 0.3799 | 17000 | 0.5489 | - | - | - | - | - |
| 0.3843 | 17200 | 0.5545 | - | - | - | - | - |
| 0.3888 | 17400 | 0.5549 | - | - | - | - | - |
| 0.3933 | 17600 | 0.5545 | - | - | - | - | - |
| 0.3977 | 17800 | 0.562 | - | - | - | - | - |
| 0.4022 | 18000 | 0.5635 | - | - | - | - | - |
| 0.4067 | 18200 | 0.549 | - | - | - | - | - |
| 0.4111 | 18400 | 0.5591 | - | - | - | - | - |
| 0.4156 | 18600 | 0.5574 | - | - | - | - | - |
| 0.4201 | 18800 | 0.5506 | - | - | - | - | - |
| 0.4246 | 19000 | 0.5556 | - | - | - | - | - |
| 0.4290 | 19200 | 0.5606 | - | - | - | - | - |
| 0.4335 | 19400 | 0.5523 | - | - | - | - | - |
| 0.4380 | 19600 | 0.5566 | - | - | - | - | - |
| 0.4424 | 19800 | 0.5417 | - | - | - | - | - |
| 0.4469 | 20000 | 0.5493 | - | - | - | - | - |
| 0.4514 | 20200 | 0.5443 | - | - | - | - | - |
| 0.4558 | 20400 | 0.5511 | - | - | - | - | - |
| 0.4603 | 20600 | 0.5458 | - | - | - | - | - |
| 0.4648 | 20800 | 0.5495 | - | - | - | - | - |
| 0.4692 | 21000 | 0.5478 | - | - | - | - | - |
| 0.4737 | 21200 | 0.5466 | - | - | - | - | - |
| 0.4782 | 21400 | 0.5416 | - | - | - | - | - |
| 0.4826 | 21600 | 0.5422 | - | - | - | - | - |
| 0.4871 | 21800 | 0.5412 | - | - | - | - | - |
| 0.4916 | 22000 | 0.5457 | - | - | - | - | - |
| 0.4961 | 22200 | 0.5326 | - | - | - | - | - |
| 0.5005 | 22400 | 0.5384 | - | - | - | - | - |
| 0.5050 | 22600 | 0.5431 | - | - | - | - | - |
| 0.5095 | 22800 | 0.549 | - | - | - | - | - |
| 0.5139 | 23000 | 0.5429 | - | - | - | - | - |
| 0.5184 | 23200 | 0.5318 | - | - | - | - | - |
| 0.5229 | 23400 | 0.5288 | - | - | - | - | - |
| 0.5273 | 23600 | 0.5371 | - | - | - | - | - |
| 0.5318 | 23800 | 0.5307 | - | - | - | - | - |
| 0.5363 | 24000 | 0.5451 | - | - | - | - | - |
| 0.5407 | 24200 | 0.5462 | - | - | - | - | - |
| 0.5452 | 24400 | 0.5322 | - | - | - | - | - |
| 0.5497 | 24600 | 0.534 | - | - | - | - | - |
| 0.5542 | 24800 | 0.5266 | - | - | - | - | - |
| 0.5586 | 25000 | 0.5338 | - | - | - | - | - |
| 0.5631 | 25200 | 0.5252 | - | - | - | - | - |
| 0.5676 | 25400 | 0.5343 | - | - | - | - | - |
| 0.5720 | 25600 | 0.5297 | - | - | - | - | - |
| 0.5765 | 25800 | 0.5296 | - | - | - | - | - |
| 0.5810 | 26000 | 0.5205 | - | - | - | - | - |
| 0.5854 | 26200 | 0.5186 | - | - | - | - | - |
| 0.5899 | 26400 | 0.5299 | - | - | - | - | - |
| 0.5944 | 26600 | 0.5291 | - | - | - | - | - |
| 0.5988 | 26800 | 0.5325 | - | - | - | - | - |
| 0.6033 | 27000 | 0.5303 | - | - | - | - | - |
| 0.6078 | 27200 | 0.53 | - | - | - | - | - |
| 0.6122 | 27400 | 0.5241 | - | - | - | - | - |
| 0.6167 | 27600 | 0.524 | - | - | - | - | - |
| 0.6212 | 27800 | 0.5332 | - | - | - | - | - |
| 0.6257 | 28000 | 0.5182 | - | - | - | - | - |
| 0.6301 | 28200 | 0.5279 | - | - | - | - | - |
| 0.6346 | 28400 | 0.5236 | - | - | - | - | - |
| 0.6391 | 28600 | 0.523 | - | - | - | - | - |
| 0.6435 | 28800 | 0.526 | - | - | - | - | - |
| 0.6480 | 29000 | 0.52 | - | - | - | - | - |
| 0.6525 | 29200 | 0.53 | - | - | - | - | - |
| 0.6569 | 29400 | 0.5284 | - | - | - | - | - |
| 0.6614 | 29600 | 0.5204 | - | - | - | - | - |
| 0.6659 | 29800 | 0.5266 | - | - | - | - | - |
| 0.6703 | 30000 | 0.5172 | - | - | - | - | - |
| 0.6748 | 30200 | 0.5176 | - | - | - | - | - |
| 0.6793 | 30400 | 0.5151 | - | - | - | - | - |
| 0.6838 | 30600 | 0.5069 | - | - | - | - | - |
| 0.6882 | 30800 | 0.5164 | - | - | - | - | - |
| 0.6927 | 31000 | 0.5129 | - | - | - | - | - |
| 0.6972 | 31200 | 0.5144 | - | - | - | - | - |
| 0.7016 | 31400 | 0.5124 | - | - | - | - | - |
| 0.7061 | 31600 | 0.5167 | - | - | - | - | - |
| 0.7106 | 31800 | 0.5025 | - | - | - | - | - |
| 0.7150 | 32000 | 0.5066 | - | - | - | - | - |
| 0.7195 | 32200 | 0.5257 | - | - | - | - | - |
| 0.7240 | 32400 | 0.5086 | - | - | - | - | - |
| 0.7284 | 32600 | 0.5164 | - | - | - | - | - |
| 0.7329 | 32800 | 0.5058 | - | - | - | - | - |
| 0.7374 | 33000 | 0.52 | - | - | - | - | - |
| 0.7418 | 33200 | 0.5175 | - | - | - | - | - |
| 0.7463 | 33400 | 0.5038 | - | - | - | - | - |
| 0.7508 | 33600 | 0.5058 | - | - | - | - | - |
| 0.7553 | 33800 | 0.5075 | - | - | - | - | - |
| 0.7597 | 34000 | 0.5218 | - | - | - | - | - |
| 0.7642 | 34200 | 0.5174 | - | - | - | - | - |
| 0.7687 | 34400 | 0.4998 | - | - | - | - | - |
| 0.7731 | 34600 | 0.502 | - | - | - | - | - |
| 0.7776 | 34800 | 0.5205 | - | - | - | - | - |
| 0.7821 | 35000 | 0.5105 | - | - | - | - | - |
| 0.7865 | 35200 | 0.5026 | - | - | - | - | - |
| 0.7910 | 35400 | 0.5079 | - | - | - | - | - |
| 0.7955 | 35600 | 0.5066 | - | - | - | - | - |
| 0.7999 | 35800 | 0.5046 | - | - | - | - | - |
| 0.8044 | 36000 | 0.5139 | - | - | - | - | - |
| 0.8089 | 36200 | 0.5113 | - | - | - | - | - |
| 0.8134 | 36400 | 0.5098 | - | - | - | - | - |
| 0.8178 | 36600 | 0.5082 | - | - | - | - | - |
| 0.8223 | 36800 | 0.5052 | - | - | - | - | - |
| 0.8268 | 37000 | 0.5071 | - | - | - | - | - |
| 0.8312 | 37200 | 0.5047 | - | - | - | - | - |
| 0.8357 | 37400 | 0.5022 | - | - | - | - | - |
| 0.8402 | 37600 | 0.516 | - | - | - | - | - |
| 0.8446 | 37800 | 0.5069 | - | - | - | - | - |
| 0.8491 | 38000 | 0.5025 | - | - | - | - | - |
| 0.8536 | 38200 | 0.499 | - | - | - | - | - |
| 0.8580 | 38400 | 0.5117 | - | - | - | - | - |
| 0.8625 | 38600 | 0.5057 | - | - | - | - | - |
| 0.8670 | 38800 | 0.5068 | - | - | - | - | - |
| 0.8714 | 39000 | 0.5002 | - | - | - | - | - |
| 0.8759 | 39200 | 0.5134 | - | - | - | - | - |
| 0.8804 | 39400 | 0.5044 | - | - | - | - | - |
| 0.8849 | 39600 | 0.5035 | - | - | - | - | - |
| 0.8893 | 39800 | 0.5098 | - | - | - | - | - |
| 0.8938 | 40000 | 0.5015 | - | - | - | - | - |
| 0.8983 | 40200 | 0.5058 | - | - | - | - | - |
| 0.9027 | 40400 | 0.4927 | - | - | - | - | - |
| 0.9072 | 40600 | 0.5091 | - | - | - | - | - |
| 0.9117 | 40800 | 0.5095 | - | - | - | - | - |
| 0.9161 | 41000 | 0.5092 | - | - | - | - | - |
| 0.9206 | 41200 | 0.5072 | - | - | - | - | - |
| 0.9251 | 41400 | 0.5027 | - | - | - | - | - |
| 0.9295 | 41600 | 0.4961 | - | - | - | - | - |
| 0.9340 | 41800 | 0.4978 | - | - | - | - | - |
| 0.9385 | 42000 | 0.4993 | - | - | - | - | - |
| 0.9430 | 42200 | 0.488 | - | - | - | - | - |
| 0.9474 | 42400 | 0.5049 | - | - | - | - | - |
| 0.9519 | 42600 | 0.4993 | - | - | - | - | - |
| 0.9564 | 42800 | 0.5159 | - | - | - | - | - |
| 0.9608 | 43000 | 0.507 | - | - | - | - | - |
| 0.9653 | 43200 | 0.4965 | - | - | - | - | - |
| 0.9698 | 43400 | 0.5048 | - | - | - | - | - |
| 0.9742 | 43600 | 0.4972 | - | - | - | - | - |
| 0.9787 | 43800 | 0.4994 | - | - | - | - | - |
| 0.9832 | 44000 | 0.5003 | - | - | - | - | - |
| 0.9876 | 44200 | 0.4934 | - | - | - | - | - |
| 0.9921 | 44400 | 0.5025 | - | - | - | - | - |
| 0.9966 | 44600 | 0.5029 | - | - | - | - | - |
| -1 | -1 | - | 0.4867 (+0.1771) | 0.4669 (-0.0735) | 0.3760 (+0.0510) | 0.4360 (-0.0646) | 0.4263 (-0.0291) |
</details>
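The `ndcg@10` columns in the logs above can be reproduced from a ranked list of relevance labels. A minimal sketch of the metric, assuming binary relevance and the standard `log2` rank discount (for binary labels the linear-gain and exponential-gain NDCG variants coincide):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: each hit at rank i is discounted by log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (sorted) ranking; 0.0 if nothing is relevant.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Putting the only relevant answer first scores 1.0;
# burying it at rank 3 drops ndcg@10 to 1 / log2(4) = 0.5.
```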
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 4.0.1
- Transformers: 4.50.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
bowilleatyou/9bcd6215-74b0-48e9-a74d-0578315627c0 | bowilleatyou | 2025-03-31T12:38:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:20:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jonjew/SharenTheFirstDescendant | Jonjew | 2025-03-31T12:35:32Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T12:35:22Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
a realistic cinematic high angle film still of TFD-Sharen-Default-NoHelmet,
a female cyborg in profile in a futuristic, armored suit is standing with
hands on her hips on a dark smoky ground with intense red lighting and
background. The ground is dirty and wet concrete with reflections of her.
There are reflections from the wet reflective ground and puddles. She is
looking away.
output:
url: images/00063-4282680639.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TFD-Sharen-Default-NoHelmet
license: unknown
---
# Sharen - The First Descendant
<Gallery />
## Model description
FROM https://civitai.com/models/748155/sharen-the-first-descendant-flux-lora?modelVersionId=836671
Trigger TFD-Sharen-Default-NoHelmet
Strength 0.7
A (FLUX) Character LoRA for Sharen (w/o Helmet) from The First Descendant videogame.
Also check out my Hailey or Viessa Ultimate FLUX LoRAs
Triggerword: TFD-Sharen-Default-NoHelmet
Suggested Weight: 0.7
My Preview Images Generated on:
- flux1-dev-Q8_0.gguf + t5xxl_fp16 (ForgeUI)
- Euler, Simple
- 960x1728 (or 1024x1600) + 1.2x Hires. Fix (4x-UltraSharp upscaler)
- Distilled CFG Scale: 3.5
Add the following to your prompt to help you get the character:
TFD-Sharen-Default-NoHelmet, a female cyborg in a futuristic, armored suit
She has white makeup lines and silver lipstick. Her dark brown hair is styled in multiple, thick braids adorned with small, metallic rings.
She wears a futuristic, armor-like suit that is predominantly metallic silver with gold accents and intricate, glowing blue details. The suit is form-fitting and covers her entire body, with a high collar that extends to her neck and a large chest piece that reveals a glowing purple skin-tight design. The armor has a sleek, polished appearance with smooth, rounded edges and a slightly reflective surface, giving it a high-tech, futuristic aesthetic. Its form-fitting, aerodynamic shape emphasizes her curvaceous physique.
## Trigger words
You should use `TFD-Sharen-Default-NoHelmet` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/SharenTheFirstDescendant/tree/main) them in the Files & versions tab.
|
greatnomadicseal/ppo-LunarLander-200k-new-hyperparams | greatnomadicseal | 2025-03-31T12:34:06Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-31T12:33:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.91 +/- 27.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files tab for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; verify it against the repository's file listing.
checkpoint = load_from_hub("greatnomadicseal/ppo-LunarLander-200k-new-hyperparams", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JK303/q-Taxi-v3-learn-6-epsilon-small | JK303 | 2025-03-31T12:33:57Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-31T12:33:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-learn-6-epsilon-small
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the Q-table dictionary (layout assumed from the original snippet).
path = hf_hub_download(repo_id="JK303/q-Taxi-v3-learn-6-epsilon-small", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kareem-oudeh/kareem-lora | kareem-oudeh | 2025-03-31T12:33:39Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-03-31T11:44:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
tiz12/lora_model_llama_garbage | tiz12 | 2025-03-31T12:33:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:32:48Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tiz12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yujia23/im_llama5 | yujia23 | 2025-03-31T12:30:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
]
| null | 2025-03-31T12:28:51Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
dgambettaphd/M_gen10_W_doc1000_synt64_MPP5-100_lastFalse | dgambettaphd | 2025-03-31T12:29:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:29:14Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nanocoh/test | nanocoh | 2025-03-31T12:29:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2412.15115",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:29:20Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B
tags:
- chat
library_name: transformers
---
# QwQ-32B
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
<p align="center">
<img width="100%" src="figures/benchmark.jpg">
</p>
**This repo contains the QwQ 32B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines).
**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
QwQ is based on Qwen2.5, whose code has been integrated into the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet with `apply_chat_template` showing you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Usage Guidelines
To achieve optimal performance, we recommend the following settings:
1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
2. **Sampling Parameters**:
- Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions.
- Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.
3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in `apply_chat_template`.
4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g.,`\"answer\": \"C\"`." in the prompt.
5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
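For frameworks built on `transformers`, the recommended sampling settings above can be collected into a single set of generation kwargs. This is only a sketch: `min_p` requires a recent `transformers` release, so drop it if your version does not support it.

```python
# Sampling settings recommended above, as kwargs for `model.generate`.
# `min_p` support depends on your transformers version; remove it if unsupported.
sampling_kwargs = {
    "do_sample": True,   # avoid greedy decoding (risk of endless repetitions)
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 30,         # any value in the recommended 20-40 range works
    "min_p": 0.0,
    "max_new_tokens": 32768,
}

# Then reuse the quickstart code above:
# generated_ids = model.generate(**model_inputs, **sampling_kwargs)
```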
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwq32b,
title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@article{qwen2.5,
title={Qwen2.5 Technical Report},
author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
journal={arXiv preprint arXiv:2412.15115},
year={2024}
}
``` |
mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF | mradermacher | 2025-03-31T12:28:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Dans-DiscountModels/Mistral-NeMo-Minitron-8B-Base-ChatML",
"base_model:quantized:Dans-DiscountModels/Mistral-NeMo-Minitron-8B-Base-ChatML",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:10:43Z | ---
base_model: Dans-DiscountModels/Mistral-NeMo-Minitron-8B-Base-ChatML
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Dans-DiscountModels/Mistral-NeMo-Minitron-8B-Base-ChatML
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
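If a quant is ever published in multiple parts, simple byte-level concatenation restores the single GGUF file. A sketch with dummy files (the real names typically follow a `*.partXofY` pattern — check the repository's file listing):

```shell
# Dummy stand-ins for split quant files; real names usually look like
# <model>.Q8_0.gguf.part1of2 / .part2of2 — verify against the file listing.
printf 'AAA' > demo.gguf.part1of2
printf 'BBB' > demo.gguf.part2of2
# Concatenate the parts in order to restore the single GGUF file
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
```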
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.IQ4_XS.gguf) | IQ4_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q4_K_S.gguf) | Q4_K_S | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q5_K_S.gguf) | Q5_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q5_K_M.gguf) | Q5_K_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q6_K.gguf) | Q6_K | 7.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.Q8_0.gguf) | Q8_0 | 9.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-ChatML-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base-ChatML.f16.gguf) | f16 | 16.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NativeSQL-GGUF | mradermacher | 2025-03-31T12:26:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ahmedrizwan239/NativeSQL",
"base_model:quantized:ahmedrizwan239/NativeSQL",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:24:10Z | ---
base_model: ahmedrizwan239/NativeSQL
language: en
library_name: transformers
license: cc-by-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ahmedrizwan239/NativeSQL
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NativeSQL-GGUF/resolve/main/NativeSQL.f16.gguf) | f16 | 0.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eugeneseo/poca-SoccerTwos | eugeneseo | 2025-03-31T12:25:32Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2025-03-31T12:25:30Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eugeneseo/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
JK303/q-Taxi-v3 | JK303 | 2025-03-31T12:24:10Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-31T12:23:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # use `import gym` instead on older setups

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebook; it downloads and unpickles the saved Q-table dict from the Hub.
model = load_from_hub(repo_id="JK303/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
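Once loaded, acting greedily is just a per-state argmax over the Q-table. A sketch with a toy table (the actual key, e.g. `model["qtable"]`, depends on how the pickle was saved):

```python
import numpy as np

# Toy stand-in for the downloaded Q-table: rows = states, columns = actions
qtable = np.array([[0.1, 0.5, 0.2],
                   [0.9, 0.0, 0.3]])

def greedy_action(qtable, state):
    # Greedy policy: take the action with the highest Q-value for this state
    return int(np.argmax(qtable[state]))

action = greedy_action(qtable, 0)  # -> 1
```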
|
sonuiq415/Jskk | sonuiq415 | 2025-03-31T12:22:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-03-31T12:22:01Z | ---
license: apache-2.0
---
|
thanaphatt1/mental_not_good_yet | thanaphatt1 | 2025-03-31T12:19:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:scb10x/typhoon2-qwen2.5-7b-instruct",
"base_model:finetune:scb10x/typhoon2-qwen2.5-7b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:18:55Z | ---
base_model: scb10x/typhoon2-qwen2.5-7b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanaphatt1
- **License:** apache-2.0
- **Finetuned from model :** scb10x/typhoon2-qwen2.5-7b-instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
iTroned/bert_weights_test | iTroned | 2025-03-31T12:18:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:11:28Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_weights_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/dl7extb8)
# bert_weights_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2107
- Accuracy Offensive: 0.9441
- F1 Offensive: 0.9425
- Accuracy Targeted: 0.9441
- F1 Targeted: 0.9173
- Accuracy Stance: 0.9079
- F1 Stance: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Offensive | Accuracy Targeted | F1 Targeted | Accuracy Stance | F1 Stance |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:------------:|:-----------------:|:-----------:|:---------------:|:---------:|
| 0.2784 | 1.0 | 1490 | 0.2279 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2328 | 2.0 | 2980 | 0.2142 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2129 | 3.0 | 4470 | 0.2107 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2107 | 4.0 | 5960 | 0.2150 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2151 | 5.0 | 7450 | 0.2135 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
BigSmiley7/a2c-PandaReachDense-v3 | BigSmiley7 | 2025-03-31T12:16:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-31T12:12:39Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is a guess based on the usual `<algo>-<env>.zip` convention; check the repository's file listing):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# filename is assumed; verify it against the repository's files
checkpoint = load_from_hub("BigSmiley7/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Tgratzi/tma-topology-path-t5small-tuned | Tgratzi | 2025-03-31T12:16:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-03-31T01:57:30Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: tma-topology-path-t5small-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tma-topology-path-t5small-tuned
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
AfroLogicInsect/emotionClassifier | AfroLogicInsect | 2025-03-31T12:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-03-31T12:08:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sens2010/law_llama3_8B_16bit | sens2010 | 2025-03-31T12:11:10Z | 0 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T10:12:03Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chloeli/qwen-2.5-1.5B-instruct-sft-lora-countdown-search-long-1k | chloeli | 2025-03-31T12:11:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"dataset:MelinaLaimon/stream-of-search",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T11:31:08Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: MelinaLaimon/stream-of-search
library_name: transformers
model_name: Qwen/Qwen2.5-1.5B-Instruct
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for Qwen/Qwen2.5-1.5B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [MelinaLaimon/stream-of-search](https://huggingface.co/datasets/MelinaLaimon/stream-of-search) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chloeli/qwen-2.5-1.5B-instruct-sft-lora-countdown-search-long-1k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chloeli/huggingface/runs/26si2dtj)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Persona-5B-i1-GGUF | mradermacher | 2025-03-31T12:10:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Persona-5B",
"base_model:quantized:TroyDoesAI/Persona-5B",
"license:artistic-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-03-31T10:01:04Z | ---
base_model: TroyDoesAI/Persona-5B
language:
- en
library_name: transformers
license: artistic-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TroyDoesAI/Persona-5B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Persona-5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
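
As a minimal, self-contained sketch of the concatenation step (the part filenames below are hypothetical placeholders — substitute the actual split files downloaded from the repo, joined in ascending order):

```python
from pathlib import Path

# Hypothetical part names; real multi-part quants use a numbered
# naming scheme -- always join the parts in ascending order.
parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
out = Path("model.gguf")

# Create two dummy parts so this sketch runs on its own.
Path(parts[0]).write_bytes(b"GGUF-part-one-")
Path(parts[1]).write_bytes(b"part-two")

# Concatenate the parts, in order, into a single GGUF file.
with out.open("wb") as dst:
    for name in parts:
        dst.write(Path(name).read_bytes())

print(out.stat().st_size)  # 22 for the dummy bytes above
```

The resulting single file is what GGUF loaders expect.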
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q2_K.gguf) | i1-Q2_K | 2.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q4_0.gguf) | i1-Q4_0 | 3.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 3.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q4_1.gguf) | i1-Q4_1 | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-i1-GGUF/resolve/main/Persona-5B.i1-Q6_K.gguf) | i1-Q6_K | 4.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Persona-5B-GGUF | mradermacher | 2025-03-31T12:10:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Persona-5B",
"base_model:quantized:TroyDoesAI/Persona-5B",
"license:artistic-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T08:44:32Z | ---
base_model: TroyDoesAI/Persona-5B
language:
- en
library_name: transformers
license: artistic-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TroyDoesAI/Persona-5B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Persona-5B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q2_K.gguf) | Q2_K | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q3_K_S.gguf) | Q3_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q3_K_M.gguf) | Q3_K_M | 2.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q3_K_L.gguf) | Q3_K_L | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.IQ4_XS.gguf) | IQ4_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q4_K_S.gguf) | Q4_K_S | 3.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q4_K_M.gguf) | Q4_K_M | 3.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q5_K_S.gguf) | Q5_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q5_K_M.gguf) | Q5_K_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q6_K.gguf) | Q6_K | 4.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.Q8_0.gguf) | Q8_0 | 5.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Persona-5B-GGUF/resolve/main/Persona-5B.f16.gguf) | f16 | 10.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
iTroned/modernbert_test | iTroned | 2025-03-31T12:08:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:01:29Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
model-index:
- name: modernbert_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/m9fls98g)
# modernbert_test
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2178
- Accuracy Offensive: 0.9441
- F1 Offensive: 0.9425
- Accuracy Targeted: 0.9441
- F1 Targeted: 0.9173
- Accuracy Stance: 0.9079
- F1 Stance: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Offensive | Accuracy Targeted | F1 Targeted | Accuracy Stance | F1 Stance |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:------------:|:-----------------:|:-----------:|:---------------:|:---------:|
| 0.2732 | 1.0 | 1490 | 0.2292 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2142 | 2.0 | 2980 | 0.2226 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1991 | 3.0 | 4470 | 0.2178 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1948 | 4.0 | 5960 | 0.2279 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.198 | 5.0 | 7450 | 0.2322 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
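
The accuracy and F1 columns above are standard classification metrics; as a quick reminder of what the F1 numbers summarize, here is a minimal sketch computing F1 from raw counts (the counts are illustrative only, not taken from this run):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 90 true positives, 10 false positives/negatives.
print(f1_score(tp=90, fp=10, fn=10))  # prints approximately 0.9
```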
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
codrug/yelp_review_classifier | codrug | 2025-03-31T12:07:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-03-31T11:36:52Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: yelp_review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp_review_classifier
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0600
- Accuracy: 0.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
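
The linear scheduler listed above decays the learning rate from its initial value down to zero over the full run. A minimal stdlib approximation of that decay, with no warmup to match the config (the 375-step total is an assumption: 3 epochs × 125 steps/epoch, per the results table):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Linearly decay base_lr to zero over total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 375  # assumed: 3 epochs x 125 steps/epoch
print(linear_lr(0, total))      # full 5e-05 at the first step
print(linear_lr(total, total))  # 0.0 at the end of training
```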
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.0626 | 0.529 |
| No log | 2.0 | 250 | 1.0282 | 0.546 |
| No log | 3.0 | 375 | 1.0600 | 0.57 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
pmimpiyush/piyush-face-lora1 | pmimpiyush | 2025-03-31T12:07:20Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-03-31T12:07:12Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mehtaji
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# piyush_face_lora1
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `mehtaji` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
jeezzzhusss/layoutlmv3-finetuned-cord_100 | jeezzzhusss | 2025-03-31T12:06:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-03-31T10:36:13Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
model-index:
- name: layoutlmv3-finetuned-cord_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mjs227/grpo-sft-12-ep5-unmerged | mjs227 | 2025-03-31T12:06:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:06:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bebecu/SCHIELE_style_LoRA | bebecu | 2025-03-31T12:05:04Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-03-31T12:02:12Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in CHERKASHIN style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - bebecu/SCHIELE_style_LoRA
<Gallery />
## Model description
These are bebecu/SCHIELE_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo collage in CHERKASHIN style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/bebecu/SCHIELE_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (untested): load the LoRA on top of the SDXL base pipeline
# with diffusers, then prompt with the trigger phrase.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bebecu/SCHIELE_style_LoRA")
image = pipe("photo collage in CHERKASHIN style").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
memevis/vim000 | memevis | 2025-03-31T12:04:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:02:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
langdai/gemma-2-2b-it-tool-think | langdai | 2025-03-31T12:03:57Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T11:26:13Z | ---
library_name: transformers
license: mit
base_model:
- google/gemma-2b-it
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model was created by merging PEFT fine-tuned adapter weights into the base model, so it is a standalone model.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Liching]
- **Funded by:** [hobby]
- **Model type:** [text-generation]
- **Language(s) (NLP):** [En]
- **License:** [MIT]
- **Finetuned from model:** [gemma-2b-it]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
gemma-2b-it cannot make tool calls, nor does it respond with explicit reasoning in the style of the recently released DeepSeek-R1; this fine-tune was created to address both limitations.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model was fine-tuned for only one epoch, so it remains prone to bias and errors.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from transformers import pipeline
import torch
model_id = "langdai/gemma-2-2b-it-tool-think"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda:0",
)  # For GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)
# model.to(torch.bfloat16)
model.eval()
generator = pipeline("text-generation", model= model, tokenizer= tokenizer)
```
```python
prompt="""<bos><start_of_turn>human
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags.You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions.Here are the available tools:<tools> [{'type': 'function', 'function': {'name': 'convert_currency', 'description': 'Convert from one currency to another', 'parameters': {'type': 'object', 'properties': {'amount': {'type': 'number', 'description': 'The amount to convert'}, 'from_currency': {'type': 'string', 'description': 'The currency to convert from'}, 'to_currency': {'type': 'string', 'description': 'The currency to convert to'}}, 'required': ['amount', 'from_currency', 'to_currency']}}}, {'type': 'function', 'function': {'name': 'calculate_distance', 'description': 'Calculate the distance between two locations', 'parameters': {'type': 'object', 'properties': {'start_location': {'type': 'string', 'description': 'The starting location'}, 'end_location': {'type': 'string', 'description': 'The ending location'}}, 'required': ['start_location', 'end_location']}}}] </tools>Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{tool_call}
</tool_call>Also, before making a call to a function take the time to plan the function to take. Make that thinking process between <think>{your thoughts}</think>
Hi, I need to convert 500 INR to Euros. Can you help me with that?<end_of_turn><eos>
<start_of_turn>model
<think>"""
output = generator([{"role": "user", "content": prompt}], max_new_tokens=512, return_full_text=False)[0]
print(output)
```
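The model emits its function call as JSON inside `<tool_call>` tags, as described in the prompt above. A minimal parser for that output might look like this (a sketch assuming one call per response; `reply` is a hypothetical model output):

```python
import json
import re


def extract_tool_call(text: str):
    """Pull the JSON payload out of the first <tool_call>...</tool_call> block."""
    m = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    if m is None:
        return None
    return json.loads(m.group(1))


reply = (
    "<think>The user wants 500 INR converted to EUR.</think>\n"
    "<tool_call>\n"
    '{"name": "convert_currency", "arguments": {"amount": 500, '
    '"from_currency": "INR", "to_currency": "EUR"}}\n'
    "</tool_call>"
)
call = extract_tool_call(reply)
print(call["name"])  # convert_currency
```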
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [T4 24GPU]
- **Hours used:** [4 hours] |
nairaxo/orpheus_lora_tun | nairaxo | 2025-03-31T12:03:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:03:36Z | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nairaxo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/KamenRiderPAseer | Jonjew | 2025-03-31T12:01:49Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T12:01:23Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: KamenRider Gavv
output:
url: images/F857B96S95Z1NFG9486VKGZXG0.jpeg
- text: KamenRider Gavv
output:
url: images/QX6W5F2108MDFVPWKV5448XTW0.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: KamenRider Gavv
license: unknown
---
# KamenRider-Collections-FLUX-PAseer
<Gallery />
## Model description
FROM https://civitai.com/models/705193/kamenrider-collections-flux-paseer?modelVersionId=831392
Trigger KamenRider Gavv
Strength 0.9
Kamen Rider Zero-One (仮面ライダーゼロワン, Kamen Raidā Zerowan) is a Japanese tokusatsu drama in Toei Company's Kamen Rider Series. It is the first series to debut during the Reiwa period and the thirty-fourth overall. The series premiered on September 1, 2019 and joined Kishiryu Sentai Ryusoulger in the Super Hero Time line-up after the finale of Kamen Rider Zi-O. After Ryusoulger concluded, the series was joined by Mashin Sentai Kiramager on March 8, 2020. After the finale of Zero-One, Kiramager was joined by Kamen Rider Saber in the Super Hero Time block.
Kamen Rider Takeshi Hongo/Shin (シン・仮面ライダー, Shin Kamen Raidā), also known internationally as Shin Masked Rider, is a Japanese superhero tokusatsu film serving as a reboot of the 1971 TV series Kamen Rider as well as to commemorate the franchise's 50th Anniversary. In the film, Takeshi Hongo and his cohort Ruriko Midorikawa fight against an evil terrorist organization that is responsible for his augmentation and stop them from conquering society.
Kamen Rider Gavv (仮面ライダーガヴ, Kamen Raidā Gavu) is a Japanese tokusatsu drama in Toei Company's Kamen Rider Series. It is the sixth series to debut in the Reiwa Era and the thirty-ninth overall.[1] The series premiered on September 1, 2024, joining Bakuage Sentai Boonboomger in the Super Hero Time lineup after the finale of Kamen Rider Gotchard.
## Trigger words
You should use `KamenRider Gavv` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/KamenRiderPAseer/tree/main) them in the Files & versions tab.
|
mradermacher/miscii-1020-GGUF | mradermacher | 2025-03-31T12:01:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:sthenno/miscii-1020",
"base_model:quantized:sthenno/miscii-1020",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:32:57Z | ---
base_model: sthenno/miscii-1020
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sthenno/miscii-1020
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/miscii-1020-GGUF/resolve/main/miscii-1020.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
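Since the table is sorted by size, it can double as a lookup for picking the largest quant that fits a download or VRAM budget. A toy helper (sizes copied from the table above; illustrative only, not part of this repo):

```python
# File sizes in GB, copied from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 5.9, "Q3_K_S": 6.8, "Q3_K_M": 7.4, "Q3_K_L": 8.0,
    "IQ4_XS": 8.3, "Q4_K_S": 8.7, "Q4_K_M": 9.1, "Q5_K_S": 10.4,
    "Q5_K_M": 10.6, "Q6_K": 12.2, "Q8_0": 15.8,
}


def best_fit(budget_gb: float):
    """Return the largest quant type that fits under budget_gb, or None."""
    fits = {name: gb for name, gb in QUANT_SIZES_GB.items() if gb <= budget_gb}
    return max(fits, key=fits.get) if fits else None


print(best_fit(10.0))  # Q4_K_M
```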
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
juniorVision/qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput | juniorVision | 2025-03-31T12:00:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T10:37:30Z | ---
base_model: Qwen/Qwen2.5-14B-Instruct
library_name: transformers
model_name: qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="juniorVision/qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.46.3
- Pytorch: 2.5.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RioShiina/qwen2.5-bakeneko-32b-instruct-exl2 | RioShiina | 2025-03-31T12:00:07Z | 0 | 0 | null | [
"ja",
"base_model:rinna/qwen2.5-bakeneko-32b-instruct",
"base_model:quantized:rinna/qwen2.5-bakeneko-32b-instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-31T11:59:54Z | ---
license: apache-2.0
base_model: rinna/qwen2.5-bakeneko-32b-instruct
base_model_relation: quantized
language:
- ja
---
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.2.8">turboderp's ExLlamaV2 v0.2.8</a> for quantization.
**[2.2bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/2.2bpw)**
**[3.0bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/3.0bpw)**
**[4.0bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/4.0bpw)**
**[5.0bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/5.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/6.0bpw)**
**[7.0bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/7.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/qwen2.5-bakeneko-32b-instruct-exl2/tree/8.0bpw)**
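Each bit-width above is stored on its own branch of the repo, following a `<bits>bpw` naming scheme. A minimal sketch of fetching one branch with `huggingface_hub` (the actual download call is left commented so the snippet stays offline):

```python
# from huggingface_hub import snapshot_download  # needed for the real download

REPO_ID = "rioshiina/qwen2.5-bakeneko-32b-instruct-exl2"


def branch_for(bits: float) -> str:
    """Map a requested bit-width to the repo's branch naming scheme."""
    return f"{bits:.1f}bpw"


# path = snapshot_download(repo_id=REPO_ID, revision=branch_for(4.0))
print(branch_for(4.0))  # 4.0bpw
```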
## Calibration Dataset
[TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)
## qwen2.5-bakeneko-32b-instruct-exl2
- Model creator: [rinna](https://huggingface.co/rinna)
- Original model: [qwen2.5-bakeneko-32b-instruct](https://huggingface.co/rinna/qwen2.5-bakeneko-32b-instruct)
## License
[The Apache License, Version 2.0](https://opensource.org/license/apache-2-0) |
DevCG/mistral-7b-4bit-aura | DevCG | 2025-03-31T11:58:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-03-31T11:54:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yfarm01/sn29_arch31_c0 | yfarm01 | 2025-03-31T11:57:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T11:52:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
laicsiifes/swin-gportuguese-2 | laicsiifes | 2025-03-31T11:57:10Z | 161 | 4 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"pt",
"dataset:laicsiifes/flickr30k-pt-br",
"base_model:pierreguillou/gpt2-small-portuguese",
"base_model:finetune:pierreguillou/gpt2-small-portuguese",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2024-09-01T18:06:19Z | ---
library_name: transformers
datasets:
- laicsiifes/flickr30k-pt-br
language:
- pt
metrics:
- bleu
- rouge
- meteor
- bertscore
base_model:
- pierreguillou/gpt2-small-portuguese
pipeline_tag: image-to-text
model-index:
- name: Swin-GPorTuguese-2
results:
- task:
name: Image Captioning
type: image-to-text
dataset:
name: Flickr30K
type: laicsiifes/flickr30k-pt-br
split: test
metrics:
- name: CIDEr-D
type: cider
value: 64.71
- name: BLEU@4
type: bleu
value: 23.15
- name: ROUGE-L
type: rouge
value: 39.39
- name: METEOR
type: meteor
value: 44.36
- name: BERTScore
type: bertscore
value: 71.7
license: mit
---
# 🎉 Swin-GPorTuguese-2 for Brazilian Portuguese Image Captioning
Swin-GPorTuguese-2 model trained for image captioning on [Flickr30K Portuguese](https://huggingface.co/datasets/laicsiifes/flickr30k-pt-br) (translated version using Google Translator API)
at resolution 224x224 and max sequence length of 1024 tokens.
## 🤖 Model Description
Swin-GPorTuguese-2 is a Vision Encoder Decoder model that leverages the checkpoints of the [Swin Transformer](https://huggingface.co/microsoft/swin-base-patch4-window7-224)
as encoder and the checkpoints of the [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese) as decoder.
The encoder checkpoints come from the Swin Transformer version pre-trained on ImageNet-1k at resolution 224x224.
The code used for training and evaluation is available at: https://github.com/laicsiifes/ved-transformer-caption-ptbr. In this work, Swin-GPorTuguese-2
was trained together with its buddy [Swin-DistilBERTimbau](https://huggingface.co/laicsiifes/swin-distilbert-flickr30k-pt-br).
Other models evaluated did not perform as well as Swin-DistilBERTimbau and Swin-GPorTuguese-2, namely: DeiT-BERTimbau,
DeiT-DistilBERTimbau, DeiT-GPorTuguese-2, Swin-BERTimbau, ViT-BERTimbau, ViT-DistilBERTimbau and ViT-GPorTuguese-2.
## 🧑💻 How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoTokenizer, AutoImageProcessor, VisionEncoderDecoderModel
# load a fine-tuned image captioning model and corresponding tokenizer and image processor
model = VisionEncoderDecoderModel.from_pretrained("laicsiifes/swin-gportuguese-2")
tokenizer = AutoTokenizer.from_pretrained("laicsiifes/swin-gportuguese-2")
image_processor = AutoImageProcessor.from_pretrained("laicsiifes/swin-gportuguese-2")
# preprocess an image
url = "http://images.cocodataset.org/val2014/COCO_val2014_000000458153.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(image, return_tensors="pt").pixel_values
# generate caption
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
```python
import matplotlib.pyplot as plt
# plot image with caption
plt.imshow(image)
plt.axis("off")
plt.title(generated_text)
plt.show()
```

## 📈 Results
The evaluation metrics CIDEr-D, BLEU@4, ROUGE-L, METEOR and BERTScore
(using [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased)) are abbreviated as C, B@4, RL, M and BS, respectively.
|Model|Dataset|Eval. Split|C|B@4|RL|M|BS|
|:---:|:------:|:--------:|:-----:|:----:|:-----:|:----:|:-------:|
|Swin-DistilBERTimbau|Flickr30K Portuguese|test|66.73|24.65|39.98|44.71|72.30|
|Swin-GPorTuguese-2|Flickr30K Portuguese|test|64.71|23.15|39.39|44.36|71.70|
## 📋 BibTeX entry and citation info
```bibtex
@inproceedings{bromonschenkel2024comparative,
title={A Comparative Evaluation of Transformer-Based Vision Encoder-Decoder Models for Brazilian Portuguese Image Captioning},
author={Bromonschenkel, Gabriel and Oliveira, Hil{\'a}rio and Paix{\~a}o, Thiago M},
booktitle={2024 37th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)},
pages={1--6},
year={2024},
organization={IEEE}
}
``` |
mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF | mradermacher | 2025-03-31T11:57:03Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-Archive/R1_Virtuoso-10B-v0.1e1",
"base_model:quantized:Nitral-Archive/R1_Virtuoso-10B-v0.1e1",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:33:34Z | ---
base_model: Nitral-Archive/R1_Virtuoso-10B-v0.1e1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-Archive/R1_Virtuoso-10B-v0.1e1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q2_K.gguf) | Q2_K | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q3_K_S.gguf) | Q3_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q3_K_M.gguf) | Q3_K_M | 5.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q3_K_L.gguf) | Q3_K_L | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.IQ4_XS.gguf) | IQ4_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q4_K_S.gguf) | Q4_K_S | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q4_K_M.gguf) | Q4_K_M | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q5_K_S.gguf) | Q5_K_S | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q5_K_M.gguf) | Q5_K_M | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q6_K.gguf) | Q6_K | 8.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.Q8_0.gguf) | Q8_0 | 11.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/R1_Virtuoso-10B-v0.1e1-GGUF/resolve/main/R1_Virtuoso-10B-v0.1e1.f16.gguf) | f16 | 20.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
burtenshaw/gemma-3-12b-it-codeforces-SFT-Q4_K_M-GGUF | burtenshaw | 2025-03-31T11:56:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"dataset:open-r1/codeforces-cots",
"base_model:burtenshaw/gemma-3-12b-it-codeforces-SFT",
"base_model:quantized:burtenshaw/gemma-3-12b-it-codeforces-SFT",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:56:10Z | ---
base_model: burtenshaw/gemma-3-12b-it-codeforces-SFT
datasets: open-r1/codeforces-cots
library_name: transformers
model_name: gemma-3-12b-it-codeforces-SFT
tags:
- generated_from_trainer
- trl
- sft
- llama-cpp
- gguf-my-repo
licence: license
---
# burtenshaw/gemma-3-12b-it-codeforces-SFT-Q4_K_M-GGUF
This model was converted to GGUF format from [`burtenshaw/gemma-3-12b-it-codeforces-SFT`](https://huggingface.co/burtenshaw/gemma-3-12b-it-codeforces-SFT) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/burtenshaw/gemma-3-12b-it-codeforces-SFT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo burtenshaw/gemma-3-12b-it-codeforces-SFT-Q4_K_M-GGUF --hf-file gemma-3-12b-it-codeforces-sft-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo burtenshaw/gemma-3-12b-it-codeforces-SFT-Q4_K_M-GGUF --hf-file gemma-3-12b-it-codeforces-sft-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo burtenshaw/gemma-3-12b-it-codeforces-SFT-Q4_K_M-GGUF --hf-file gemma-3-12b-it-codeforces-sft-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo burtenshaw/gemma-3-12b-it-codeforces-SFT-Q4_K_M-GGUF --hf-file gemma-3-12b-it-codeforces-sft-q4_k_m.gguf -c 2048
```
|
Badribn/mistral_financial_sentiment_data_expert | Badribn | 2025-03-31T11:56:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:56:18Z | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: transformers
model_name: mistral_financial_sentiment_data_expert
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral_financial_sentiment_data_expert
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Badribn/mistral_financial_sentiment_data_expert", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/Magpie-Align_-_MagpieLM-8B-Chat-v0.1-8bits | RichardErkhov | 2025-03-31T11:54:41Z | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:2406.08464",
"arxiv:2411.07133",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-03-31T11:47:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MagpieLM-8B-Chat-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/Magpie-Align/
- Original model: https://huggingface.co/Magpie-Align/MagpieLM-8B-Chat-v0.1/
Original model description:
---
library_name: transformers
license: llama3.1
base_model: Magpie-Align/MagpieLM-8B-SFT-v0.1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- Magpie-Align/MagpieLM-SFT-Data-v0.1
- Magpie-Align/MagpieLM-DPO-Data-v0.1
model-index:
- name: MagpieLM-8B-Chat-v0.1
results: []
---

# 🐦 MagpieLM-8B-Chat-v0.1
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://api.wandb.ai/links/uw-nsl/0s1eegy2)
## 🧐 About This Model
*Model full name: Llama3.1-MagpieLM-8B-Chat-v0.1*
This model is an aligned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B), which achieves state-of-the-art performance among open-aligned SLMs. It even outperforms larger open-weight models including Llama-3-8B-Instruct, Llama-3.1-8B-Instruct, Qwen-2-7B-Instruct, and Gemma-2-9B-it.
We apply the following standard alignment pipeline with two carefully crafted synthetic datasets.
We first perform SFT using [Magpie-Align/MagpieLM-SFT-Data-v0.1](https://huggingface.co/datasets/Magpie-Align/MagpieLM-SFT-Data-v0.1).
* **SFT Model Checkpoint:** [Magpie-Align/MagpieLM-8B-SFT-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-8B-SFT-v0.1)
We then perform DPO on the [Magpie-Align/MagpieLM-DPO-Data-v0.1](https://huggingface.co/datasets/Magpie-Align/MagpieLM-DPO-Data-v0.1) dataset.
## 🔥 Benchmark Performance
Greedy Decoding
- **Alpaca Eval 2: 58.18 (LC), 62.38 (WR)**
- **Arena Hard: 48.4**
- **WildBench WB Score (v2.0625): 44.72**
**Benchmark Performance Compare to Other SOTA SLMs**

## 👀 Other Information
**License**: Please follow [Meta Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE).
**Conversation Template**: Please use the Llama 3 chat template for the best performance.
**Limitations**: This model primarily understands and generates content in English. Its outputs may contain factual errors or logical inconsistencies, or reflect biases present in the training data. While the model aims to improve instruction-following and helpfulness, it isn't specifically designed for complex reasoning tasks, potentially leading to suboptimal performance in these areas. Additionally, the model may produce unsafe or inappropriate content, as no specific safety training was implemented during the alignment process.
## 🧐 How to use it?
[](https://huggingface.co/spaces/flydust/MagpieLM-8B)
Please update transformers to the latest version by `pip install git+https://github.com/huggingface/transformers`.
You can then run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
```python
import transformers
import torch
model_id = "Magpie-Align/MagpieLM-8B-Chat-v0.1"  # full Hub repo id so the checkpoint resolves
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are Magpie, a friendly AI assistant."},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
---
# Alignment Pipeline
The detailed alignment pipeline is as follows.
## Stage 1: Supervised Fine-tuning
We use [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for SFT. Please refer to the model card of [SFT checkpoint](https://huggingface.co/Magpie-Align/MagpieLM-8B-SFT-v0.1) and below for detailed configurations.
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
chat_template: llama3
load_in_8bit: false
load_in_4bit: false
strict: false
main_process_port: 0
datasets:
- path: Magpie-Align/MagpieLM-SFT-Data-v0.1
type: sharegpt
conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: axolotl_out/MagpieLM-8B-SFT-v0.1
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
wandb_project: SynDa
wandb_entity:
wandb_watch:
wandb_name: MagpieLM-8B-SFT-v0.1
wandb_log_model:
hub_model_id: Magpie-Align/MagpieLM-8B-SFT-v0.1
gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 5
eval_table_size:
saves_per_epoch:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
## Stage 2: Direct Preference Optimization
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
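As a sanity check on the numbers above, the total batch sizes follow directly from the per-device sizes, device count, and gradient accumulation steps (a quick illustrative calculation, not part of the training code):

```python
# Effective train batch = per-device batch x num devices x grad accumulation
per_device_train_batch_size = 2
per_device_eval_batch_size = 4
num_devices = 4
gradient_accumulation_steps = 16

total_train_batch_size = (per_device_train_batch_size
                          * num_devices
                          * gradient_accumulation_steps)
total_eval_batch_size = per_device_eval_batch_size * num_devices  # no accumulation at eval

print(total_train_batch_size)  # 128
print(total_eval_batch_size)   # 16
```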
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.686 | 0.0653 | 100 | 0.6856 | -0.0491 | -0.0616 | 0.6480 | 0.0125 | -471.3315 | -478.8181 | -0.7034 | -0.7427 |
| 0.6218 | 0.1306 | 200 | 0.6277 | -0.6128 | -0.7720 | 0.6960 | 0.1591 | -542.3653 | -535.1920 | -0.7771 | -0.8125 |
| 0.5705 | 0.1959 | 300 | 0.5545 | -2.4738 | -3.0052 | 0.7270 | 0.5314 | -765.6894 | -721.2881 | -0.7894 | -0.8230 |
| 0.4606 | 0.2612 | 400 | 0.5081 | -2.6780 | -3.3782 | 0.7560 | 0.7002 | -802.9893 | -741.7116 | -0.6813 | -0.7247 |
| 0.4314 | 0.3266 | 500 | 0.4787 | -3.6697 | -4.6026 | 0.7630 | 0.9329 | -925.4283 | -840.8740 | -0.6189 | -0.6691 |
| 0.449 | 0.3919 | 600 | 0.4533 | -3.7414 | -4.8019 | 0.7820 | 1.0604 | -945.3563 | -848.0514 | -0.6157 | -0.6681 |
| 0.4538 | 0.4572 | 700 | 0.4350 | -4.3858 | -5.6549 | 0.7890 | 1.2690 | -1030.6561 | -912.4920 | -0.5789 | -0.6331 |
| 0.35 | 0.5225 | 800 | 0.4186 | -4.7129 | -6.1662 | 0.8010 | 1.4533 | -1081.7843 | -945.1964 | -0.5778 | -0.6347 |
| 0.4153 | 0.5878 | 900 | 0.4108 | -4.9836 | -6.5320 | 0.7970 | 1.5484 | -1118.3677 | -972.2631 | -0.5895 | -0.6474 |
| 0.3935 | 0.6531 | 1000 | 0.3999 | -4.4303 | -5.9370 | 0.8110 | 1.5067 | -1058.8646 | -916.9379 | -0.6016 | -0.6598 |
| 0.3205 | 0.7184 | 1100 | 0.3950 | -5.1884 | -6.8827 | 0.8010 | 1.6943 | -1153.4371 | -992.7452 | -0.5846 | -0.6452 |
| 0.3612 | 0.7837 | 1200 | 0.3901 | -5.0426 | -6.7179 | 0.8040 | 1.6753 | -1136.9619 | -978.1701 | -0.6046 | -0.6637 |
| 0.3058 | 0.8490 | 1300 | 0.3877 | -5.1224 | -6.8428 | 0.8040 | 1.7204 | -1149.4465 | -986.1475 | -0.6087 | -0.6690 |
| 0.3467 | 0.9144 | 1400 | 0.3871 | -5.2335 | -6.9809 | 0.8090 | 1.7474 | -1163.2629 | -997.2610 | -0.6071 | -0.6672 |
| 0.3197 | 0.9797 | 1500 | 0.3867 | -5.1502 | -6.8793 | 0.8080 | 1.7291 | -1153.0979 | -988.9237 | -0.6120 | -0.6722 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
<details><summary>See alignment handbook configs</summary>
```yaml
# Customized Configs
model_name_or_path: Magpie-Align/MagpieLM-8B-SFT-v0.1
hub_model_id: Magpie-Align/MagpieLM-8B-Chat-v0.1
output_dir: alignment_handbook_out/MagpieLM-8B-Chat-v0.1
run_name: MagpieLM-8B-Chat-v0.1
dataset_mixer:
Magpie-Align/MagpieLM-DPO-Data-v0.1: 1.0
dataset_splits:
- train
- test
preprocessing_num_workers: 24
# DPOTrainer arguments
bf16: true
beta: 0.01
learning_rate: 2.0e-7
gradient_accumulation_steps: 16
per_device_train_batch_size: 2
per_device_eval_batch_size: 4
num_train_epochs: 1
max_length: 2048
max_prompt_length: 1800
warmup_ratio: 0.1
logging_steps: 1
lr_scheduler_type: cosine
optim: adamw_torch
torch_dtype: null
# use_flash_attention_2: true
do_eval: true
evaluation_strategy: steps
eval_steps: 100
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: False
log_level: info
push_to_hub: true
save_total_limit: 0
seed: 42
report_to:
- wandb
```
</details><br>
## 📚 Citation
If you find the model, data, or code useful, please cite:
```
@article{xu2024magpie,
title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
year={2024},
eprint={2406.08464},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{xu2024stronger,
title={Stronger Models are NOT Stronger Teachers for Instruction Tuning},
author={Xu, Zhangchen and Jiang, Fengqing and Niu, Luyao and Lin, Bill Yuchen and Poovendran, Radha},
journal={arXiv preprint arXiv:2411.07133},
year={2024}
}
```
**Contact**
Questions? Contact:
- [Zhangchen Xu](https://zhangchenxu.com/) [zxu9 at uw dot edu], and
- [Bill Yuchen Lin](https://yuchenlin.xyz/) [yuchenlin1995 at gmail dot com]
|
mlfoundations-dev/qwen2-5_code_ablate_duplications_6 | mlfoundations-dev | 2025-03-31T11:53:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-30T19:17:56Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2-5_code_ablate_duplications_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-5_code_ablate_duplications_6
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/code_ablate_duplications_6 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
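To give a feel for the cosine schedule with 10% warmup listed above, here is a small sketch of the schedule shape. This is an approximation of the usual Transformers cosine-with-warmup schedule, not the exact implementation used in training:

```python
import math

def lr_at(step, total_steps, base_lr=1e-5, warmup_ratio=0.1):
    """Linear warmup to base_lr, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1000  # hypothetical step count for illustration
print(lr_at(0, total))      # 0.0 (start of warmup)
print(lr_at(100, total))    # 1e-05 (end of warmup, peak LR)
print(lr_at(1000, total))   # 0.0 (fully decayed)
```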
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
CCappy/Qwen2.5-14B-Drone | CCappy | 2025-03-31T11:52:27Z | 18 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-30T17:00:10Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: other
---
# Qwen2.5-Drone
This is a fine-tuned version of Qwen2.5-14B-Instruct specialized for drone control and robotic vision tasks.
## Model Details
- **Base Model**: Qwen/Qwen2.5-14B-Instruct
- **Training Technique**: QLoRA fine-tuning
- **Training Dataset**: Custom drone command dataset with ReAct reasoning
- **Use Cases**: Drone control, object detection processing, robotic vision tasks
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("CCappy/Qwen2.5-Drone", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("CCappy/Qwen2.5-Drone", trust_remote_code=True)
# Function definitions example
function_defs = '''[
{"type": "function", "function": {"name": "get_objects_in_view"...}}
...
]'''
# Example query
query = "Find a red car in the scene and hover 2 meters above it."
# Format input
input_text = f"<|im_start|>user\nYou have access to the following functions:\n{function_defs}\n\n{query}<|im_end|>\n<|im_start|>assistant\n"
# Generate response
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(
inputs["input_ids"],
max_new_tokens=1024,
temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=False)
print(response)
```
## Training Details
The model was fine-tuned using QLoRA with the following parameters:
- LoRA rank: 16
- LoRA alpha: 32
- Dropout: 0.05
- Target modules: q_proj, k_proj, v_proj, o_proj (attention modules)
- Training epochs: 3
- Batch size: 8
- Per device train batch size: 8
- Per device eval batch size: 8
- Gradient accumulation steps: 2
- Learning rate: 2e-4
- Mixed precision: bf16
- Optimizer: adamw_torch_fused
- Max gradient norm: 1.5
- Dataloader workers: 4
- Group by length: True
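To illustrate what the LoRA hyperparameters above control: with rank r = 16 and alpha = 32, the adapter update is applied with a scaling factor alpha / r = 2. A tiny pure-Python sketch of the effective weight follows (illustrative only; the matrices are shrunken stand-ins, and this is not the actual training code):

```python
# LoRA effective weight: W_eff = W + (alpha / r) * (B @ A), B initialized to 0.
# The card uses r = 16, alpha = 32, so the scaling factor is alpha / r = 2.0.
scaling = 32 / 16

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

# Shrunken stand-ins: a 2x2 frozen weight with rank-1 adapters B (2x1) and A (1x2)
W = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0], [0.0]]          # trainable, starts at zero
A = [[0.5, 0.5]]            # trainable, starts small

delta = matmul(B, A)
W_eff = [[w + scaling * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

print(scaling)        # 2.0
print(W_eff == W)     # True: with B at zero the adapted model equals the base
```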
## Limitations
This model is specialized for drone control scenarios and may not perform as well on general tasks as the base Qwen2.5 model. The model inherits limitations from the base Qwen2.5-14B-Instruct model, including potential biases and hallucinations, but has been optimized for interpreting and responding to commands in drone control contexts.
The model also relies on a specific set of function definitions, such as those shown in the usage example above, which are still being tested and set up.
|
Eddie-3dd13/ToolACE-2-Llama-3.1-8B-Q8_0-GGUF | Eddie-3dd13 | 2025-03-31T11:49:55Z | 0 | 0 | null | [
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Team-ACE/ToolACE",
"base_model:Team-ACE/ToolACE-2-Llama-3.1-8B",
"base_model:quantized:Team-ACE/ToolACE-2-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:49:17Z | ---
base_model: Team-ACE/ToolACE-2-Llama-3.1-8B
datasets:
- Team-ACE/ToolACE
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- code
- llama-cpp
- gguf-my-repo
---
# Eddie-3dd13/ToolACE-2-Llama-3.1-8B-Q8_0-GGUF
This model was converted to GGUF format from [`Team-ACE/ToolACE-2-Llama-3.1-8B`](https://huggingface.co/Team-ACE/ToolACE-2-Llama-3.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Team-ACE/ToolACE-2-Llama-3.1-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Eddie-3dd13/ToolACE-2-Llama-3.1-8B-Q8_0-GGUF --hf-file toolace-2-llama-3.1-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Eddie-3dd13/ToolACE-2-Llama-3.1-8B-Q8_0-GGUF --hf-file toolace-2-llama-3.1-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Eddie-3dd13/ToolACE-2-Llama-3.1-8B-Q8_0-GGUF --hf-file toolace-2-llama-3.1-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Eddie-3dd13/ToolACE-2-Llama-3.1-8B-Q8_0-GGUF --hf-file toolace-2-llama-3.1-8b-q8_0.gguf -c 2048
```
|
dashaberyozova1/cherakshin_style_LoRA | dashaberyozova1 | 2025-03-31T11:48:49Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-03-31T11:44:55Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in CHERKASHIN style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - dashaberyozova1/cherakshin_style_LoRA
<Gallery />
## Model description
These are dashaberyozova1/cherakshin_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in CHERKASHIN style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/dashaberyozova1/cherakshin_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
langdai/gemma-2-2B-it_think_funcion_call | langdai | 2025-03-31T11:48:23Z | 0 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-29T11:37:06Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it_think_funcion_call
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it_think_funcion_call
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
The model is fine-tuned to produce an explicit reasoning ("think") step while generating responses, distilled from the DeepSeek R1 model. The model can also be used for tool calling.
This is a PEFT adapter model; the Quick start section below shows how to load it correctly.
## Quick start
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from transformers import pipeline
import torch
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
)
peft_model_id = "langdai/gemma-2-2B-it_think_funcion_call"
# device = "auto"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
device_map="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, peft_model_id)
model.to(torch.bfloat16)
model.eval()
generator = pipeline("text-generation", model= model, tokenizer= tokenizer)
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Output
Output for the question in the Quick start section:
```
This is a fascinating thought experiment! If I had a time machine that could only travel to the past or the future once, I would choose to go to the future. Here's why:
1. **The Unknown:** The future is a realm of possibilities. It's filled with potential advancements, discoveries, and events that we can't even imagine yet. By going to the future, I could witness these advancements firsthand and gain a deeper understanding of the world.
2. **The Impact:** The past is often romanticized, but it's also a time of great loss and hardship. By going to the
```
Output for a conversation (tool-calling) template:
```
prompt = """<bos><start_of_turn>human
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags.You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions.Here are the available tools:<tools> [{'type': 'function', 'function': {'name': 'convert_currency', 'description': 'Convert from one currency to another', 'parameters': {'type': 'object', 'properties': {'amount': {'type': 'number', 'description': 'The amount to convert'}, 'from_currency': {'type': 'string', 'description': 'The currency to convert from'}, 'to_currency': {'type': 'string', 'description': 'The currency to convert to'}}, 'required': ['amount', 'from_currency', 'to_currency']}}}, {'type': 'function', 'function': {'name': 'calculate_distance', 'description': 'Calculate the distance between two locations', 'parameters': {'type': 'object', 'properties': {'start_location': {'type': 'string', 'description': 'The starting location'}, 'end_location': {'type': 'string', 'description': 'The ending location'}}, 'required': ['start_location', 'end_location']}}}] </tools>Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{tool_call}
</tool_call>Also, before making a call to a function take the time to plan the function to take. Make that thinking process between <think>{your thoughts}</think>
Hi, I need to convert 500 USD to Euros. Can you help me with that?<end_of_turn><eos>
<start_of_turn>model
<think>"""
```
Response:
```
<bos><start_of_turn>human
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags.You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions.Here are the available tools:<tools> [{'type': 'function', 'function': {'name': 'convert_currency', 'description': 'Convert from one currency to another', 'parameters': {'type': 'object', 'properties': {'amount': {'type': 'number', 'description': 'The amount to convert'}, 'from_currency': {'type': 'string', 'description': 'The currency to convert from'}, 'to_currency': {'type': 'string', 'description': 'The currency to convert to'}}, 'required': ['amount', 'from_currency', 'to_currency']}}}, {'type': 'function', 'function': {'name': 'calculate_distance', 'description': 'Calculate the distance between two locations', 'parameters': {'type': 'object', 'properties': {'start_location': {'type': 'string', 'description': 'The starting location'}, 'end_location': {'type': 'string', 'description': 'The ending location'}}, 'required': ['start_location', 'end_location']}}}] </tools>Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{tool_call}
</tool_call>Also, before making a call to a function take the time to plan the function to take. Make that thinking process between <think>{your thoughts}</think>
Hi, I need to convert 500 USD to Euros. Can you help me with that?<end_of_turn><eos>
<start_of_turn>model
<think>Okay, so the user is asking to convert 500 USD to Euros. I need to figure out how to respond using the available functions. Let me look at the tools provided. There's a function called 'convert_currency' which seems perfect for this task. It requires an amount, the source currency, and the target currency. The user provided all these details: 500 USD to EUR.
I should structure the function call with the correct parameters. The amount is 500, from_currency is USD, and to_currency is EUR. I need to make sure the JSON format is correct, including the function name and the arguments object.
I think that's all I need to do. The function should handle the conversion accurately.
</think><tool_call>
{'name': 'convert_currency', 'arguments': {'amount': 500, 'from_currency': 'USD', 'to_currency': 'EUR'}}
</tool_call><end_of_turn><eos>
```
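The prompt above is plain text, so it can be assembled programmatically. A minimal sketch, assuming the tag strings shown in the template above (`build_prompt` is an illustrative helper, not part of the model's API):

```python
# Sketch: assemble the Gemma-style function-calling prompt shown above.
# The tag strings are copied from this card's template; the helper name
# and structure are illustrative assumptions.

def build_prompt(system_text: str, user_text: str) -> str:
    # System text (tool signatures and instructions) and the user question
    # both go inside the single "human" turn, then the model turn is opened
    # and primed with <think> so the model emits its reasoning first.
    return (
        "<bos><start_of_turn>human\n"
        f"{system_text}\n"
        f"{user_text}<end_of_turn><eos>\n"
        "<start_of_turn>model\n"
        "<think>"
    )

prompt = build_prompt(
    "You are a function calling AI model. <tools>...</tools>",  # tool signatures go here
    "Hi, I need to convert 500 USD to Euros. Can you help me with that?",
)
```

The resulting string can be passed to `tokenizer`/`model.generate` directly instead of going through the chat pipeline.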
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.2.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
stacklok/Qwen2.5-Coder-7B-Instruct-stitch-express-chat | stacklok | 2025-03-31T11:46:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-27T14:02:10Z | ---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stacklok
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
guangce/polydiff | guangce | 2025-03-31T11:41:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-03-31T09:54:38Z | 
 |
tinycompany/MI-QTK-81K | tinycompany | 2025-03-31T11:40:44Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:40:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
csukuangfj/k2fsa-zipformer-bilingual-zh-en-t | csukuangfj | 2025-03-31T11:39:09Z | 0 | 4 | null | [
"onnx",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-17T10:18:37Z | ---
license: apache-2.0
---
## Chinese-English ASR model using k2-zipformer-streaming
### AIShell-1 and WenetSpeech test set results with modified-beam-search streaming decoding using epoch-12.pt
| decode_chunk_len | AIShell-1 | TEST_NET | TEST_MEETING |
|------------------|-----------|----------|--------------|
| 64 | 4.79 | 11.6 | 12.64 |
### Training and decoding commands
```
nohup ./pruned_transducer_stateless7_streaming/train.py --world-size 8 --num-epochs 30 --start-epoch 1 \
--num-encoder-layers 2,2,2,2,2 \
--feedforward-dims 768,768,768,768,768 \
--nhead 4,4,4,4,4 \
--encoder-dims 256,256,256,256,256 \
--attention-dims 192,192,192,192,192 \
--encoder-unmasked-dims 192,192,192,192,192 \
--exp-dir pruned_transducer_stateless7_streaming/exp --max-duration 360 \
> pruned_transducer_stateless7_streaming/exp/nohup.zipformer &
nohup ./pruned_transducer_stateless7_streaming/decode.py --epoch 12 --avg 1 \
--num-encoder-layers 2,2,2,2,2 \
--feedforward-dims 768,768,768,768,768 \
--nhead 4,4,4,4,4 \
--encoder-dims 256,256,256,256,256 \
--attention-dims 192,192,192,192,192 \
--encoder-unmasked-dims 192,192,192,192,192 \
--exp-dir pruned_transducer_stateless7_streaming/exp \
--max-duration 600 --decode-chunk-len 32 --decoding-method modified_beam_search --beam-size 4 \
> nohup.zipformer.deocode &
```
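The architecture flags above (`--num-encoder-layers`, `--feedforward-dims`, etc.) are passed as comma-separated strings with one value per encoder stack. A sketch of how such a flag maps to per-stack integers (the helper name is illustrative; icefall performs the equivalent parsing internally):

```python
# Illustrative parser for comma-separated per-stack architecture flags,
# e.g. --num-encoder-layers "2,2,2,2,2" -> [2, 2, 2, 2, 2].
def parse_stack_flag(value: str) -> list:
    return [int(part) for part in value.split(",")]

num_encoder_layers = parse_stack_flag("2,2,2,2,2")
feedforward_dims = parse_stack_flag("768,768,768,768,768")
# Every per-stack flag must supply the same number of values.
assert len(num_encoder_layers) == len(feedforward_dims)
```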
### Model unit is char+BPE, as defined in `data/lang_char_bpe/tokens.txt`
### Tips
The k2-fsa version and training parameters were:
```
{'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.2', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'a74f59dba1863cd9386ba4d8815850421260eee7', 'k2-git-date': 'Fri Dec 2 08:32:22 2022', 'lhotse-version': '1.5.0.dev+git.8ce38fc.dirty', 'torch-version': '1.11.0+cu113', 'torch-cuda-available': True, 'torch-cuda-version': '11.3', 'python-version': '3.7', 'icefall-git-branch': 'master', 'icefall-git-sha1': '600f387-dirty', 'icefall-git-date': 'Thu Feb 9 15:16:04 2023', 'icefall-path': '/opt/conda/lib/python3.7/site-packages', 'k2-path': '/opt/conda/lib/python3.7/site-packages/k2/__init__.py', 'lhotse-path': '/opt/conda/lib/python3.7/site-packages/lhotse/__init__.py', 'hostname': 'worker-0', 'IP address': '127.0.0.1'}, 'world_size': 8, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 11, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp_t'), 'lang_dir': 'data/lang_char_bpe', 'base_lr': 0.01, 'lr_batches': 5000, 'lr_epochs': 3.5, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': False, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '768,768,768,768,768', 'nhead': '4,4,4,4,4', 'encoder_dims': '256,256,256,256,256', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '192,192,192,192,192', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 360, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'training_subset': 'mix', 'blank_id': 0, 'vocab_size': 6254}
```
|
hyunwoo612/bad_good_comment_v6_GGUF | hyunwoo612 | 2025-03-31T11:39:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:Bllossom/llama-3.2-Korean-Bllossom-3B",
"base_model:quantized:Bllossom/llama-3.2-Korean-Bllossom-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:38:49Z | ---
base_model: Bllossom/llama-3.2-Korean-Bllossom-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hyunwoo612
- **License:** apache-2.0
- **Finetuned from model :** Bllossom/llama-3.2-Korean-Bllossom-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sens2010/llaw_llama3_8B_4bit_GUFF | sens2010 | 2025-03-31T11:38:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:37:35Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sens2010
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/MonsterHunterValstraxArmor | Jonjew | 2025-03-31T11:38:01Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T11:37:56Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
cinematic photo HDR photo of mhrvvalstraxarmor, a man wear a red and sliver
costume, holding a great tachi,Cinematic photography of a realistic one man
solo cosplaying,Her eyes, a vivid shade of emerald yellow and The setting is
indoors at a bustling convention hall, with other attendees in various
costumes in the background. The hall is decorated with banners and booths
showcasing different fandoms, creating a vibrant and energetic atmosphere
typical of major cosplay events like Comic-Con or Anime Expo. sparkle with a
blend of enthusiasm and playfulness, capturing the essence of the beloved
character, male In this dynamic portrait ,standing in to full body
featuring stunning sparkle with a blend of enthusiasm and playfulness,
capturing the essence of the beloved charactercurtain,
<lora:cosplay_flux_V1:0.6> ggasia11k cosplay concept
<lora:mhrvvalstraxarmor:1> . High dynamic range, vivid, rich details, clear
shadows and highlights, realistic, intense, enhanced contrast, highly
detailed, 35mm photograph, film, bokeh, professional, 4k, highly detailed
parameters:
negative_prompt: >-
flat, low contrast, oversaturated, underexposed, overexposed, blurred,
noisy, drawing, painting, crayon, sketch, graphite, impressionist, noisy,
blurry, soft, deformed, ugly, Watermark, Text, censored, deformed, bad
anatomy, disfigured, poorly drawn face, mutated, extra limb, ugly, poorly
drawn hands, missing limb, floating limbs, disconnected limbs,
disconnected head, malformed hands, long neck, mutated hands and fingers,
bad hands, missing fingers, cropped, worst quality, low quality, mutation,
poorly drawn, huge calf, bad hands, fused hand, missing hand, disappearing
arms, disappearing thigh, disappearing calf, disappearing legs, missing
fingers, fused fingers, abnormal eye proportion, Abnormal hands, abnormal
legs, abnormal feet, abnormal fingers
output:
url: images/image (31).png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: a man wear a red and sliver costume and red shoes, MHRVValstraxARMOR
license: unknown
---
# Monster Hunter Valstrax Armor
<Gallery />
## Model description
From https://civitai.com/models/1021518/monster-hunter-valstrax-armor-cosplay?modelVersionId=1145468
Trigger: a man wear a red and sliver costume and red shoes, MHRVValstraxARMOR
Strength: 1
Tag words: MHRVValstraxARMOR, a man wear a red and sliver costume and red shoes
Optional: back of the armor
If you like this model, please give it a like. Thank you!
## Trigger words
You should use `a man wear a red and sliver costume and red shoes` to trigger the image generation.
You should use `MHRVValstraxARMOR` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/MonsterHunterValstraxArmor/tree/main) them in the Files & versions tab.
|
csukuangfj/k2fsa-zipformer-chinese-english-mixed | csukuangfj | 2025-03-31T11:37:59Z | 0 | 1 | null | [
"onnx",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-17T10:20:12Z | ---
license: apache-2.0
---
This repo is forked from
https://huggingface.co/pfluo/k2fsa-zipformer-chinese-english-mixed
## Chinese-English ASR model using k2-zipformer-streaming
### AIShell-1 and WenetSpeech test set results with modified-beam-search streaming decoding using epoch-14.pt
| decode_method | AIShell-1 | TEST_NET | TEST_MEETING |
|------------------|-----------|----------|--------------|
| modified_beam_search | 3.04 | 8.97 | 8.83 |
| + SF (lm scale=0.1) | 2.77 | - | - |
### Training and decoding commands
```
nohup ./pruned_transducer_stateless7_streaming/train.py --world-size 8 --num-epochs 30 --start-epoch 1 --feedforward-dims "1024,1024,1536,1536,1024" --exp-dir pruned_transducer_stateless7_streaming/exp --max-duration 360 > pruned_transducer_stateless7_streaming/exp/nohup.zipformer &
nohup ./pruned_transducer_stateless7_streaming/decode.py --epoch 14 --avg 1 --exp-dir ./pruned_transducer_stateless7_streaming/exp --max-duration 600 --decode-chunk-len 64 --decoding-method modified_beam_search --beam-size 4 > nohup.zipformer.deocode &
```
### Model unit is char+BPE, as defined in `data/lang_char_bpe/tokens.txt`
### Tips
The k2-fsa version and training parameters were:
```
{'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.2', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'a74f59dba1863cd9386ba4d8815850421260eee7', 'k2-git-date': 'Fri Dec 2 08:32:22 2022', 'lhotse-version': '1.5.0.dev+git.8ce38fc.dirty', 'torch-version': '1.11.0+cu113', 'torch-cuda-available': True, 'torch-cuda-version': '11.3', 'python-version': '3.7', 'icefall-git-branch': 'master', 'icefall-git-sha1': '11b08db-dirty', 'icefall-git-date': 'Thu Jan 12 10:19:21 2023', 'icefall-path': '/opt/conda/lib/python3.7/site-packages', 'k2-path': '/opt/conda/lib/python3.7/site-packages/k2/__init__.py', 'lhotse-path': '/opt/conda/lib/python3.7/site-packages/lhotse/__init__.py', 'hostname': 'xxx', 'IP address': 'x.x.x.x'}, 'world_size': 8, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp'), 'lang_dir': 'data/lang_char_bpe', 'base_lr': 0.01, 'lr_batches': 5000, 'lr_epochs': 3.5, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': False, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,1536,1536,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 360, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'training_subset': '12k_hour', 'blank_id': 0, 'vocab_size': 6254}
``` |
nice2mitya/a_1345817538 | nice2mitya | 2025-03-31T11:35:28Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-03-31T11:08:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
yandex/YandexGPT-5-Lite-8B-instruct-GGUF | yandex | 2025-03-31T11:34:29Z | 6 | 3 | null | [
"gguf",
"ru",
"en",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-03-28T16:24:33Z | ---
license: other
license_name: yandexgpt-5-lite-8b
license_link: LICENSE
language:
- ru
- en
---
# YandexGPT-5-Lite-Instruct-GGUF
A quantized version of YandexGPT 5 Lite 8B Instruct. Information about the model is available in the main repository: [YandexGPT-5-Lite-8B-instruct](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct).
**UPD**: We have replaced the `.gguf` file in the original repository with the one closest in quality to the original model.
## llama.cpp
First, build [llama.cpp](https://github.com/ggml-org/llama.cpp) (or update it if you already have it):
```bash
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release
cd ..
```
If resources allow, you can speed up the build: `cmake --build build --config Release -j 10`
Run the model in interactive mode:
```bash
llama.cpp/build/bin/llama-cli -m YandexGPT-5-Lite-8B-instruct-Q4_K_M.gguf
```
We recommend using interactive mode only to get acquainted with the model.
Start the server:
```bash
llama.cpp/build/bin/llama-server -m YandexGPT-5-Lite-8B-instruct-Q4_K_M.gguf -c 32768
```
If resources allow, you can speed up inference by adding `-t 10`.
## Ollama
Run the model in interactive mode:
```bash
ollama run yandex/YandexGPT-5-Lite-8B-instruct-GGUF
```
We recommend using interactive mode only to get acquainted with the model.
## Template details
We use a non-standard dialog template: the model is trained to generate only one reply after the sequence `Ассистент:[SEP]`, ending it with the `</s>` token. The dialog in the prompt itself can be of any length.
Because of this, in interactive mode the model may produce results that differ from calling the model in generation mode on a fixed dialog. This is why we recommend using interactive mode only to get acquainted with the model.
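As a rough illustration of the template behavior described above, a prompt for the next assistant reply can be assembled by terminating the dialog with `Ассистент:[SEP]`. This is a sketch only; the turn labels and joining are assumptions, and the main YandexGPT-5-Lite-8B-instruct repository should be consulted for the canonical template.

```python
# Sketch: build a prompt ending with the `Ассистент:[SEP]` sequence, after
# which the model generates exactly one reply terminated by `</s>`.
# The "Пользователь:"/"Ассистент:" turn labels and newline joining are
# illustrative assumptions, not the canonical template.

def build_dialog_prompt(turns: list) -> str:
    # turns: list of (role, text) pairs; role is "user" or "assistant"
    labels = {"user": "Пользователь:", "assistant": "Ассистент:"}
    parts = [f"{labels[role]} {text}" for role, text in turns]
    return "\n".join(parts) + "\nАссистент:[SEP]"

prompt = build_dialog_prompt([("user", "Привет! Кто ты?")])
```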
|
Jonjew/wargreymonDigimon | Jonjew | 2025-03-31T11:33:31Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T11:33:26Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
cinematic photo HDR photo of wargreymonex,a man is wearing a costume in a
gold, orange ,silver, red and armor with horns and horns on his head,dragon
orange feet, ,orange hand, a man holding a sword in his both hands, a man is
walking through the viewing ,a robot with a Shiny golden and red body and
purple wings,his detailed eyes, a vivid shade of emerald aqua green and
shiny blue, capturing the essence of the beloved character, He is wearing a
costume,(The setting is indoors at a bustling convention hall with excellent
Sufficient light bright light:1.1), with other attendees in various costumes
in the background. The hall is decorated with banners and booths showcasing
different fandoms, creating a vibrant and energetic atmosphere typical of
major cosplay events like Comic-Con or Anime Expo. sparkle with a blend of
enthusiasm and playfulness, capturing the essence of the beloved character,
male In this dynamic portrait ,standing in to full body featuring stunning
sparkle with a blend of enthusiasm and playfulness, capturing the essence of
the beloved charactercurtain. High dynamic range, vivid, rich details, clear
shadows and highlights, realistic, intense, enhanced contrast, highly
detailed, 35mm photograph, film, bokeh, professional, 4k, highly
detailed,<lora:wargreymonex:0.8> . High dynamic range, vivid, rich details,
clear shadows and highlights, realistic, intense, enhanced contrast, highly
detailed, 35mm photograph, film, bokeh, professional, 4k, highly detailed,
Photorealistic, Hyperrealistic, Hyperdetailed, analog style, soft lighting,
subsurface scattering, realistic, heavy shadow, masterpiece, best quality,
ultra realistic, 8k, golden ratio, Intricate, High Detail, film photography,
soft focus
parameters:
negative_prompt: >-
Action Figure, flat, low contrast, oversaturated, underexposed,
overexposed, blurred, noisy, drawing, painting, crayon, sketch, graphite,
impressionist, noisy, blurry, soft, deformed, ugly, Watermark, Text,
censored, deformed, bad anatomy, disfigured, poorly drawn face, mutated,
extra limb, ugly, poorly drawn hands, missing limb, floating limbs,
disconnected limbs, disconnected head, malformed hands, long neck, mutated
hands and fingers, bad hands, missing fingers, cropped, worst quality, low
quality, mutation, poorly drawn, huge calf, bad hands, fused hand, missing
hand, disappearing arms, disappearing thigh, disappearing calf,
disappearing legs, missing fingers, fused fingers, abnormal eye
proportion, Abnormal hands, abnormal legs, abnormal feet, abnormal
fingers
output:
url: images/image (100).png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: wargreymonex
license: unknown
---
# wargreymon Digimon
<Gallery />
## Model description
FROM https://civitai.com/models/1031215/wargreymon-digimon?modelVersionId=1156643
Trigger wargreymonex
Strength 1
## Trigger words
You should use `wargreymonex` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/wargreymonDigimon/tree/main) them in the Files & versions tab.
|
lucasschott/HalfCheetah-v5-SAC | lucasschott | 2025-03-31T11:32:26Z | 11 | 0 | stable-baselines3 | [
"stable-baselines3",
"safetensors",
"reinforcement-learning",
"halfcheetah",
"mujoco",
"sb3",
"sac",
"control",
"license:mpl-2.0",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-26T15:41:54Z | ---
license: mpl-2.0
tags:
- reinforcement-learning
- stable-baselines3
- halfcheetah
- mujoco
- sb3
- sac
- control
model-index:
- name: HalfCheetah-v5-SAC
results:
- task:
type: reinforcement-learning
name: Reinforcement Learning
dataset:
name: HalfCheetah-v5
type: gymnasium
metrics:
- type: mean_reward
value: 14170.55 +/- 137.28
---
# SAC Agent for HalfCheetah-v5
This is a Soft Actor-Critic (SAC) agent trained on the `HalfCheetah-v5` environment using Stable Baselines 3.
## Hyperparameters
See `config.json` for details.
## Requirements
- **Python:** 3.10
### Dependencies
```txt
gymnasium==1.0.0
gymnasium[mujoco]
torch==2.4.0
stable_baselines3==2.4.1
```
## How to Load
```python
from huggingface_hub import hf_hub_download
from stable_baselines3 import SAC
model_path = hf_hub_download(repo_id="lucasschott/HalfCheetah-v5-SAC", filename="model.zip")
agent = SAC.load(model_path)
``` |
mradermacher/INTELLECT-1-i1-GGUF | mradermacher | 2025-03-31T11:32:24Z | 21 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:PrimeIntellect/fineweb-edu",
"dataset:PrimeIntellect/fineweb",
"dataset:PrimeIntellect/StackV1-popular",
"dataset:mlfoundations/dclm-baseline-1.0-parquet",
"dataset:open-web-math/open-web-math",
"base_model:PrimeIntellect/INTELLECT-1",
"base_model:quantized:PrimeIntellect/INTELLECT-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-03-29T18:48:03Z | ---
base_model: PrimeIntellect/INTELLECT-1
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/PrimeIntellect/INTELLECT-1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/INTELLECT-1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
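For quants that are split into multiple parts intended for concatenation (as described in the linked READMEs), joining them is a plain byte-level concatenation of the parts in order. A sketch, with placeholder file names:

```python
import shutil

def concat_gguf_parts(part_paths, out_path):
    """Byte-concatenate split GGUF parts, in the given order, into one file."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# Placeholder names; use the actual part files from the repo:
# concat_gguf_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```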
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q4_0.gguf) | i1-Q4_0 | 6.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q4_1.gguf) | i1-Q4_1 | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/INTELLECT-1-i1-GGUF/resolve/main/INTELLECT-1.i1-Q6_K.gguf) | i1-Q6_K | 8.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
stacklok/Qwen2.5-Coder-7B-Instruct-stitch-tools-chat | stacklok | 2025-03-31T11:32:24Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-26T11:39:21Z | ---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stacklok
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mranderson991/hp_mother | mranderson991 | 2025-03-31T11:31:06Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-03-31T11:12:32Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: hp_mother
---
# Hp_Mother
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `hp_mother` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mranderson991/hp_mother', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RJTPP/stage2-deepseek1.5b-1k-gguf | RJTPP | 2025-03-31T11:28:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:28:01Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RJTPP
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lesso11/7a1057aa-6045-4be6-a64e-fcfa05a7cb77 | lesso11 | 2025-03-31T11:27:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:tlphams/gollm-12.8b-instruct-v2.3",
"base_model:adapter:tlphams/gollm-12.8b-instruct-v2.3",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-03-31T06:40:31Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: tlphams/gollm-12.8b-instruct-v2.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a1057aa-6045-4be6-a64e-fcfa05a7cb77
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tlphams/gollm-12.8b-instruct-v2.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1cb5206d289a131c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1cb5206d289a131c_train_data.json
type:
field_input: commit_message
field_instruction: func_before
field_output: func_after
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso11/7a1057aa-6045-4be6-a64e-fcfa05a7cb77
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000211
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/1cb5206d289a131c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 110
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 86ec4df3-ce88-4caa-9e18-54ffeeaf277c
wandb_project: 11a
wandb_run: your_name
wandb_runid: 86ec4df3-ce88-4caa-9e18-54ffeeaf277c
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a1057aa-6045-4be6-a64e-fcfa05a7cb77
This model is a fine-tuned version of [tlphams/gollm-12.8b-instruct-v2.3](https://huggingface.co/tlphams/gollm-12.8b-instruct-v2.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000211
- train_batch_size: 4
- eval_batch_size: 4
- seed: 110
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.5899 |
| 0.0736 | 0.0780 | 500 | 0.0104 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso03/3fdf570b-599c-4762-af23-06b1695a5d4c | lesso03 | 2025-03-31T11:25:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:tlphams/gollm-12.8b-instruct-v2.3",
"base_model:adapter:tlphams/gollm-12.8b-instruct-v2.3",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-03-31T06:40:40Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: tlphams/gollm-12.8b-instruct-v2.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3fdf570b-599c-4762-af23-06b1695a5d4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tlphams/gollm-12.8b-instruct-v2.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1cb5206d289a131c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1cb5206d289a131c_train_data.json
type:
field_input: commit_message
field_instruction: func_before
field_output: func_after
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso03/3fdf570b-599c-4762-af23-06b1695a5d4c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000203
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/1cb5206d289a131c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 30
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 86ec4df3-ce88-4caa-9e18-54ffeeaf277c
wandb_project: 03a
wandb_run: your_name
wandb_runid: 86ec4df3-ce88-4caa-9e18-54ffeeaf277c
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3fdf570b-599c-4762-af23-06b1695a5d4c
This model is a fine-tuned version of [tlphams/gollm-12.8b-instruct-v2.3](https://huggingface.co/tlphams/gollm-12.8b-instruct-v2.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.5900 |
| 0.09 | 0.0780 | 500 | 0.0108 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TOTORONG/gemma-3_embedding | TOTORONG | 2025-03-31T11:25:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:22:05Z | ---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
massimilianowosz/gemma-3-1b-it-ita-fp16 | massimilianowosz | 2025-03-31T11:25:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T11:24:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JuniperChinenye/Boost2 | JuniperChinenye | 2025-03-31T11:25:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T11:23:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
juice0630/9_thanos_1e-4_3000_unet | juice0630 | 2025-03-31T11:24:38Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2025-03-31T10:54:19Z | ---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of <9> thanos
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - juice0630/9_thanos_1e-4_3000_unet
These are LoRA adaption weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on a photo of <9> thanos using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
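Since the snippet above is still a TODO in the card, here is a heavily hedged sketch (not the authors' official usage code) of how these DreamBooth LoRA weights could be loaded with diffusers' `load_lora_weights`. The repository id, base model, and instance prompt are taken from this card; the sampler settings, seed handling, and function name are assumptions:

```python
# Hedged sketch: load the base model, attach this card's LoRA adapter,
# and sample one image. GPU access and model downloads are assumed.
BASE_MODEL = "stable-diffusion-v1-5/stable-diffusion-v1-5"
LORA_REPO = "juice0630/9_thanos_1e-4_3000_unet"
INSTANCE_PROMPT = "a photo of <9> thanos"  # trigger prompt from this card


def generate(prompt: str = INSTANCE_PROMPT, seed: int = 0):
    """Build the pipeline, apply the LoRA weights, and return one PIL image."""
    import torch
    from diffusers import StableDiffusionPipeline  # assumed installed

    pipe = StableDiffusionPipeline.from_pretrained(
        BASE_MODEL, torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(LORA_REPO)  # attach the DreamBooth LoRA adapter
    generator = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, num_inference_steps=30, generator=generator).images[0]
```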
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
BigSmiley7/ppo-ML-Agents-Pyramids | BigSmiley7 | 2025-03-31T11:24:26Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2025-03-31T11:24:23Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: BigSmiley7/ppo-ML-Agents-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Jonjew/SaintSeiyaSoulofGoldLoki | Jonjew | 2025-03-31T11:24:10Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T11:23:57Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
cinematic photo (Saintseiyal0kireal:1), Cinematic photography of a
realistic man solo cosplaying,his eyes, a vivid shade of emerald red and
The setting is indoors at a bustling convention hall, with other attendees
in various costumes in the background. The hall is decorated with banners
and booths showcasing different fandoms, creating a vibrant and energetic
atmosphere typical of major cosplay events like Comic-Con or Anime Expo. a
male is wearing a full black and gold armor costume with golden and black
hightlight wings ,having a purple bilateral hair. wearing black gloves with
palm fist and rest pose on his arms , he is wearing gold helmet highlighting
the subtle makeup and realisic black and golden armor that enhances her
great thin and skinyy body shape, including a soft blush and glossy lips
that mirror the character's aesthetic.In this dynamic portrait, sitting on
the ground in front of a crown people expo background , capturing the
essence of the beloved character, male In this dynamic portrait , adorned
with delicate hair accessories on one sides. His eyes, a vivid shade of
emerald red, sparkle with a blend of enthusiasm and playfulness, capturing
the essence of the beloved charactercurtain, <lora:cosplay_flux_V1:0.7>,
ggasia11k cosplay concept ,<lora:saint-seiya-loki:0.8>, 35mm photograph,
film, bokeh, professional, 4k, highly detailed, RAW candid cinema, 16mm,
color graded portra 400 film, remarkable color, ultra realistic, textured
skin, remarkable detailed pupils, realistic dull skin noise, visible skin
detail, skin fuzz, dry skin, shot with cinematic camera
parameters:
negative_prompt: >-
drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry,
soft, deformed, ugly, Watermark, Text, censored, deformed, bad anatomy,
disfigured, poorly drawn face, mutated, extra limb, ugly, poorly drawn
hands, missing limb, floating limbs, disconnected limbs, disconnected
head, malformed hands, long neck, mutated hands and fingers, bad hands,
missing fingers, cropped, worst quality, low quality, mutation, poorly
drawn, huge calf, bad hands, fused hand, missing hand, disappearing arms,
disappearing thigh, disappearing calf, disappearing legs, missing fingers,
fused fingers, abnormal eye proportion, Abnormal hands, abnormal legs,
abnormal feet, abnormal fingers
output:
url: images/image - 2024-10-19T181016.134.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Saintseiyal0kireal
license: unknown
---
# Saint Seiya Soul of Gold Loki
<Gallery />
## Model description
FROM https://civitai.com/models/286559/saint-seiya-soul-of-gold-loki?modelVersionId=972376
Trigger Saintseiyal0kireal
Strength 0.8
TAG WORDS: SAINTSAIYALOKI, 1boy, hair over one eye, gold armor, purple hair, gloves, red eyes, helmet, wings, armor, dark skin
If you like this model, please give it a like. Thank you!
## Trigger words
You should use `Saintseiyal0kireal` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/SaintSeiyaSoulofGoldLoki/tree/main) them in the Files & versions tab.
|
BigSmiley7/ppo-SnowballTarget | BigSmiley7 | 2025-03-31T11:24:10Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-03-31T11:24:07Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: BigSmiley7/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yandex/YandexGPT-5-Lite-8B-instruct | yandex | 2025-03-31T11:23:59Z | 0 | 16 | null | [
"safetensors",
"llama",
"ru",
"en",
"base_model:yandex/YandexGPT-5-Lite-8B-pretrain",
"base_model:finetune:yandex/YandexGPT-5-Lite-8B-pretrain",
"license:other",
"region:us"
]
| null | 2025-03-28T08:12:30Z | ---
license: other
license_name: yandexgpt-5-lite-8b
license_link: LICENSE
language:
- ru
- en
base_model:
- yandex/YandexGPT-5-Lite-8B-pretrain
---
# YandexGPT-5-Lite-Instruct
Instruct version of the YandexGPT 5 Lite large language model with 8B parameters and a 32k-token context length. A quantized version of the model in GGUF format is also published in a separate [repository](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF).
Trained on top of [YandexGPT 5 Lite Pretrain](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain), without using the weights of any third-party models. The alignment of the Lite version matches that of YandexGPT 5 Pro and consists of SFT and RLHF stages (described in more detail in the [article](https://habr.com/ru/companies/yandex/articles/885218/) on Habr).
Ask questions in Discussions.
## Benchmarks
According to international benchmarks and their Russian-language adaptations, YandexGPT 5 Lite comes close to its peers (Llama-3.1-8B-instruct and Qwen-2.5-7B-instruct) and surpasses them in a number of scenarios, including knowledge of Russian culture and facts.
<img src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/6b5/eb4/9ea/6b5eb49ea757bc124c938717b21f1cf7.png" alt="Benchmark table" width="100%"/>
MMLU is 5-shot; all other benchmarks are 0-shot.
## How to use
The model can be run with HF Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="cuda",
torch_dtype="auto",
)
messages = [{"role": "user", "content": "Для чего нужна токенизация?"}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][input_ids.size(1) :], skip_special_tokens=True))
```
Or with vLLM:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-instruct"
sampling_params = SamplingParams(
temperature=0.3,
top_p=0.9,
max_tokens=1024,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
llm = LLM(
MODEL_NAME,
tensor_parallel_size=1,
)
messages = [{"role": "user", "content": "В чем смысл жизни?"}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True
)[1:] # remove bos
text = tokenizer.decode(input_ids)
outputs = llm.generate(text, use_tqdm=False, sampling_params=sampling_params)
print(tokenizer.decode(outputs[0].outputs[0].token_ids, skip_special_tokens=True))
```
To run the model with llama.cpp or Ollama, you can use our quantized model, published in the [YandexGPT-5-Lite-8B-instruct-GGUF](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF) repository.
## Tokenization notes
For an exact match with our tokenization, we recommend using the original [sentencepiece](https://github.com/google/sentencepiece); the tokenizer file is located in the `original_tokenizer` folder. In our infrastructure, each dialogue turn is tokenized separately.
Because of this, in particular, a space appears at the beginning of each turn. We also replace `\n` tokens with `[NL]`; this can be done with `text.replace("\n", "[NL]")` before tokenization.
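As a hedged illustration of the conventions described here (per-reply tokenization and the `[NL]` substitution), assuming a local clone of this repository with `original_tokenizer/tokenizer.model`:

```python
# Hedged sketch of the per-reply tokenization convention described above:
# each dialogue turn is tokenized separately, with "\n" replaced by "[NL]".
def to_model_text(reply: str) -> str:
    """Apply the documented newline substitution before tokenizing."""
    return reply.replace("\n", "[NL]")


def tokenize_dialogue(replies, model_file="original_tokenizer/tokenizer.model"):
    """Tokenize each reply separately with the original sentencepiece model.

    The model_file path is an assumption (a local clone of this repository).
    """
    import sentencepiece as spm  # assumed installed: pip install sentencepiece

    sp = spm.SentencePieceProcessor(model_file=model_file)
    return [sp.encode(to_model_text(r)) for r in replies]
```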
## Template notes
We use a non-standard dialogue template: the model is trained to generate only one reply after the sequence `Ассистент:[SEP]`, finishing it with the `</s>` token. The dialogue in the prompt can be of any length.
As a result, in interactive mode the model may produce output that differs from calling it in generation mode on a fixed dialogue. We therefore recommend using interactive mode only to get acquainted with the model. |
abrotech/pack_model | abrotech | 2025-03-31T11:23:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:23:11Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abrotech
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yandex/YandexGPT-5-Lite-8B-pretrain | yandex | 2025-03-31T11:23:25Z | 10,636 | 179 | null | [
"safetensors",
"llama",
"ru",
"en",
"license:other",
"region:us"
]
| null | 2025-02-21T16:46:19Z | ---
license: other
license_name: yandexgpt-5-lite-8b
license_link: LICENSE
language:
- ru
- en
---
# YandexGPT-5-Lite-Pretrain
Pretrain version of the YandexGPT 5 Lite large language model with 8B parameters and a 32k-token context length. The model was trained in two stages.
In the first stage, the model was trained mostly on Russian- and English-language texts totalling 15T tokens, with a context length of up to 8k tokens. Dataset composition: 60% web pages, 15% code, 10% math; the rest is other domain-specific data, including synthetic data generated with our models and datasets from our services, for example Yandex Translate and the Search fact base.
In the second stage, which we called Powerup, the model was trained on 320B tokens of high-quality data. Powerup dataset composition: 25% web pages, 19% math, 18% code, 18% educational data; the rest is synthetic data, service datasets and other high-quality texts. At this stage we increased the context length to 32k tokens.
In addition, our tokenizer is well optimized for Russian: for example, 32k tokens of our model corresponds on average to 48k tokens of Qwen-2.5.
More details are in our [article on Habr](https://habr.com/ru/companies/yandex/articles/885218/).
Ask questions in Discussions.
## Benchmarks
In its category, the model reaches parity with the world's SOTA on a number of key benchmarks for pretrain models, and surpasses it on many others:
<img src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/fab/0de/405/fab0de40517e1fd4efc1302eaaf325d8.png" alt="Benchmark table" width="100%"/>
\* according to the report by the model's developers. <br>
BBH is 3-shot, HUMAN_EVAL and MBPP are 0-shot, all other benchmarks are 5-shot. <br>
All measurements were made with HF transformers.
## How to use
The model can be run with HF Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-pretrain"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, legacy=False)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="cuda",
torch_dtype="auto",
)
input_text = "Кто сказал тебе, что нет на свете настоящей,"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=18)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Or with vLLM:
```python
from vllm import LLM, SamplingParams
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-pretrain"
sampling_params = SamplingParams(
temperature=0.3,
max_tokens=18,
)
llm = LLM(
MODEL_NAME,
tensor_parallel_size=1,
)
input_texts = ["Кто сказал тебе, что нет на свете настоящей,"]
outputs = llm.generate(input_texts, use_tqdm=False, sampling_params=sampling_params)
for i in range(len(input_texts)):
print(input_texts[i] + outputs[i].outputs[0].text)
```
For an exact match with our tokenization, we recommend using the original [sentencepiece](https://github.com/google/sentencepiece):
```python
import sentencepiece as spm
import torch
# git clone https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain
tokenizer = spm.SentencePieceProcessor(
model_file="<path_to_local_repo>/tokenizer.model"
)
input_ids = tokenizer.encode(input_text, add_bos=True)
input_ids = torch.Tensor([input_ids]).to(model.device).to(torch.long)
outputs = model.generate(
input_ids=input_ids,
attention_mask=torch.ones_like(input_ids),
max_new_tokens=18
)
print(tokenizer.decode(outputs[0].tolist()))
```
## How to fine-tune for your tasks
Our model has a llama-like architecture, which means it is compatible with most existing LLM fine-tuning frameworks. Here is a short example of how to train our model with torchtune:
Download the repository:
```bash
tune download yandex/YandexGPT-5-Lite-8B-pretrain \
--output-dir YandexGPT-5-Lite-8B-pretrain
```
List the available configs and copy one that suits your task:
```bash
tune ls
tune cp llama3_1/8B_lora training_config.yaml
```
Modify the config, adapting it to our model and your task. For example, [this](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain/discussions/1#67bc4e6472499ce2ba3659a7) variant is suitable for LoRA training on the open instruct dataset `alpaca-cleaned`.
Run the training:
```bash
tune run lora_finetune_single_device --config training_config.yaml
```
Details can be found in the official torchtune [documentation](https://pytorch.org/torchtune/stable/overview.html).
|
THP2903/InternVL2_5-1B_multi | THP2903 | 2025-03-31T11:23:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"generated_from_trainer",
"trl",
"sft",
"custom_code",
"base_model:OpenGVLab/InternVL2_5-1B",
"base_model:finetune:OpenGVLab/InternVL2_5-1B",
"region:us"
]
| feature-extraction | 2025-03-31T08:38:40Z | ---
base_model: OpenGVLab/InternVL2_5-1B
library_name: transformers
model_name: InternVL2_5-1B_multi
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for InternVL2_5-1B_multi
This model is a fine-tuned version of [OpenGVLab/InternVL2_5-1B](https://huggingface.co/OpenGVLab/InternVL2_5-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="THP2903/InternVL2_5-1B_multi", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phucth290303-pythera/InternVL2_5-1B_multi/runs/mfv4yqv5)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.1
- Pytorch: 2.6.0
- Datasets: 3.4.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |