| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
noneUsername/QwQ-32B-abliterated-AWQ-INT4 | noneUsername | 2025-03-08T06:53:11Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"base_model:huihui-ai/QwQ-32B-abliterated",
"base_model:quantized:huihui-ai/QwQ-32B-abliterated",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-08T06:42:33Z | ---
base_model:
- huihui-ai/QwQ-32B-abliterated
---
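This card benchmarks an AWQ INT4 quantization of huihui-ai/QwQ-32B-abliterated against the BF16 originals below. As a hedged aside (not from the card itself), loading such an AWQ checkpoint with vLLM typically looks like this sketch:

```python
from vllm import LLM, SamplingParams

# Repo id assumed from this card's header; recent vLLM versions can also
# auto-detect AWQ from the checkpoint config.
llm = LLM(model="noneUsername/QwQ-32B-abliterated-AWQ-INT4",
          quantization="awq", max_model_len=4096)
out = llm.generate(["What is 17 * 24?"], SamplingParams(max_tokens=128))
print(out[0].outputs[0].text)
```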
vllm (pretrained=/root/autodl-tmp/QwQ-32B,add_bos_token=true,max_model_len=4096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.432|± |0.0314|
| | |strict-match | 5|exact_match|↑ |0.744|± |0.0277|
vllm (pretrained=/root/autodl-tmp/QwQ-32B,add_bos_token=true,max_model_len=4096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.444|± |0.0222|
| | |strict-match | 5|exact_match|↑ |0.716|± |0.0202|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.8140|± |0.0125|
| - humanities | 2|none | |acc |↑ |0.8359|± |0.0251|
| - other | 2|none | |acc |↑ |0.8103|± |0.0269|
| - social sciences| 2|none | |acc |↑ |0.8889|± |0.0222|
| - stem | 2|none | |acc |↑ |0.7544|± |0.0238|
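The blocks above follow the lm-evaluation-harness text output. A minimal sketch of an equivalent run through its Python API, assuming lm-eval is installed with the vLLM backend (the local model path is the card author's and is not reproducible as-is):

```python
import lm_eval

# Arguments mirror the header line above each results table.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=/root/autodl-tmp/QwQ-32B,add_bos_token=true,"
               "max_model_len=4096,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```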
vllm (pretrained=/root/autodl-tmp/QwQ-32B-abliterated,add_bos_token=true,max_model_len=4096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.528|± |0.0316|
| | |strict-match | 5|exact_match|↑ |0.740|± |0.0278|
vllm (pretrained=/root/autodl-tmp/QwQ-32B-abliterated,add_bos_token=true,max_model_len=4096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.492|± |0.0224|
| | |strict-match | 5|exact_match|↑ |0.742|± |0.0196|
vllm (pretrained=/root/autodl-tmp/QwQ-32B-abliterated,add_bos_token=true,max_model_len=4096,dtype=bfloat16,max_num_seqs=3), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: 1
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.8152|± |0.0126|
| - humanities | 2|none | |acc |↑ |0.8359|± |0.0253|
| - other | 2|none | |acc |↑ |0.8000|± |0.0276|
| - social sciences| 2|none | |acc |↑ |0.8722|± |0.0240|
| - stem | 2|none | |acc |↑ |0.7754|± |0.0232|
vllm (pretrained=/root/autodl-tmp/QwQ-32B-abliterated-awq,add_bos_token=true,max_model_len=4096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.476|± |0.0316|
| | |strict-match | 5|exact_match|↑ |0.752|± |0.0274|
vllm (pretrained=/root/autodl-tmp/QwQ-32B-abliterated-awq,add_bos_token=true,max_model_len=4096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.524|± |0.0224|
| | |strict-match | 5|exact_match|↑ |0.716|± |0.0202|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.8023|± |0.0130|
| - humanities | 2|none | |acc |↑ |0.8000|± |0.0266|
| - other | 2|none | |acc |↑ |0.7949|± |0.0284|
| - social sciences| 2|none | |acc |↑ |0.8500|± |0.0258|
| - stem | 2|none | |acc |↑ |0.7789|± |0.0235| |
huangyizhuo/distilbert-base-uncased-finetuned-emotion | huangyizhuo | 2025-03-08T06:52:58Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-16T01:46:46Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2146
- Accuracy: 0.9265
- F1: 0.9263
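A minimal usage sketch (not part of the original card), assuming the checkpoint is published under this repo id with a standard sequence-classification head:

```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="huangyizhuo/distilbert-base-uncased-finetuned-emotion")
print(clf("I can't wait to see the results of this experiment!"))
# e.g. [{'label': 'joy', 'score': 0.98}] — labels depend on the training set
```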
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8268 | 1.0 | 250 | 0.3062 | 0.9095 | 0.9086 |
| 0.2478 | 2.0 | 500 | 0.2146 | 0.9265 | 0.9263 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
andaole/ppo-Huggy | andaole | 2025-03-08T06:51:49Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-03-08T06:51:36Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: andaole/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF | mradermacher | 2025-03-08T06:50:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:marcuscedricridia/Hush-Qwen2.5-7B-RP",
"base_model:quantized:marcuscedricridia/Hush-Qwen2.5-7B-RP",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-08T02:50:15Z | ---
base_model: marcuscedricridia/Hush-Qwen2.5-7B-RP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/marcuscedricridia/Hush-Qwen2.5-7B-RP
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
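As a concrete, hedged illustration of the usage note above, a single quant from the table below can be pulled with llama-cpp-python (the filename is assumed from the quant list; any other entry works the same way):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads one GGUF file from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF",
    filename="Hush-Qwen2.5-7B-RP.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```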
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jonjew/TomNulensStyle | Jonjew | 2025-03-08T06:48:28Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-03-08T06:48:20Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
By Tom Nulens. A digital illustration shoot from a profile camera angle
about a double exposure portrait of a woman with abstract technology
elements and text. the image also shows a woman in the middle of the image,
who appears to be a young woman with dark hair styled in a bun, and is
facing the viewer with her eyes closed. she has a serene expression and is
wearing a beige cape that is partially open, revealing her upper body. the
background is a gradient of beige and black, with a mix of light and dark
tones. on the left side of the woman, there is a text overlay that reads
"o.e.t." and on the right side, there are various mathematical equations and
symbols that appear to be made up of black and gold elements. the woman's
hair is styled in an updo, and her face is adorned with gold and black
geometric patterns. the overall effect is a striking digital art piece with
a focus on technology and abstract elements.
<lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 36692865603449'
output:
url: images/_00000_6_.png
- text: >-
By Tom Nulens. A digitally created artwork in a modern, abstract style. The
central focus is a stylized, surreal portrait of a woman's face, divided
into two halves. The left half is a realistic depiction of a woman's face
with detailed features and a neutral expression, framed by golden, metallic
feathers and geometric shapes. The right half is more abstract, featuring a
blend of metallic textures, organic forms, and geometric patterns. The
woman's right eye is partially obscured by a large, metallic, swirling
pattern that resembles a vortex or a galaxy. Her hair is intricately
designed, with feathers and metallic elements interwoven, adding to the
surreal and futuristic aesthetic. The background is a gradient of deep, dark
colors, transitioning from black to a rich, dark red, which enhances the
golden and metallic tones of the artwork. The overall composition is
symmetrical, with the left and right halves mirroring each other, creating a
sense of balance and harmony. The textures and colors are highly detailed
and rich, giving the artwork a luxurious and opulent feel. The style is
reminiscent of high-end fashion photography or digital art, blending realism
with abstract elements. <lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 680769812145859'
output:
url: images/_00000_5_.png
- text: >-
By Tom Nulens. A photo-realistic shoot from a frontal camera angle about a
woman with a mysterious expression surrounded by white butterflies. the
image also shows a dark, moody atmosphere. on the middle of the image, a
woman appears to be in her mid-twenties, with a slim body and dark lipstick.
she is facing the viewer, with her eyes looking directly at the viewer. her
hair is styled in a way that frames her face, and her hair color is black.
her face is covered by multiple white butterflies that are flying around her
head, creating an ethereal and surreal atmosphere. the butterflies are of
various sizes and colors, adding to the sense of depth and dimensionality in
the image. the background is a solid black color, providing a stark contrast
to the woman's pale skin. the overall effect is one of beauty and mystery,
with the butterflies adding a touch of whimsy and enchantment.
<lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 345956540005846'
output:
url: images/_00000_10_.png
- text: >-
By Tom Nulens. A photo-realistic portrait shoot from a frontal camera angle
about a stylized portrait of a woman with a unique headpiece adorned with
yellow orchids and foliage. the image also shows a striking contrast between
the two colors. on the middle of the image, a woman appears to be facing the
viewer, with her eyes closed, wearing a sleeveless yellow dress that is cut
in half, revealing her bare shoulders and black lips. she has a slim body
and a bald head, and her face is painted with a combination of white and
orange colors. her hair is adorned with large, yellow flowers and orange
flowers, and she is wearing a feathered headpiece. the background is a
solid, light green color, and the overall aesthetic is minimalistic with a
focus on the woman's face and hair. <lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 775346785547268'
output:
url: images/_00000_7_.png
- text: >-
By Tom Nulens. A digital artwork featuring a stylized portrait of a woman.
The background is a muted, gradient gray that gradually fades from light to
dark, providing a subtle contrast to the vibrant colors and textures of the
subject. The woman's face is partially obscured by a gold, metallic mask
that covers her eyes and part of her nose, giving her an enigmatic and
mysterious appearance. Her lips are painted a deep red, adding a touch of
drama to her expression. Her hair is styled extravagantly, with a large,
ornate headdress that combines elements of feathers, sequins, and metallic
textures. The feathers are predominantly black, adding a sense of elegance
and grandeur. The headdress is intertwined with golden and metallic
elements, creating a striking contrast against her pale skin. The woman's
shoulders and upper body are adorned with a mix of shimmering gold and black
fabrics, with a textured, almost armor-like quality. The gold elements have
a reflective surface, adding depth and dimension to the artwork. The overall
style of the artwork is contemporary and avant-garde, blending elements of
high fashion and digital art to create a visually striking and dynamic
composition. <lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 162846266189991'
output:
url: images/_00000_12_.png
- text: >-
By Tom Nulens. A highly stylized CGI digital artwork featuring a woman with
a pale, almost translucent complexion. Her hair is platinum blonde, styled
in a sleek, straight manner that frames her face. The central focus of the
image is her face and upper torso, which are intricately adorned with
abstract, metallic gold shapes and fragments that appear to be breaking away
from her skin. These gold fragments form letters, numbers, and abstract
forms, creating a dynamic and fragmented visual effect. The gold fragments
are shiny and reflective, contrasting starkly with her pale skin and the
white background. The woman's expression is calm and serene, with her eyes
closed and her lips slightly parted, adding to the ethereal and almost
otherworldly feel of the image. The background is a smooth, light gray,
which helps emphasize the stark contrast between the woman's skin and the
gold fragments. The texture of her skin is smooth and delicate, while the
gold fragments have a rough, jagged texture, enhancing the sense of
fragmentation and disintegration. The overall style of the artwork is highly
conceptual and abstract, blending elements of modern art and digital
manipulation to create a surreal and visually striking image.
<lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 1124374567933754'
output:
url: images/_00000_15_.png
- text: >-
By Tom Nulens. A digitally manipulated photograph of a young woman with a
striking, avant-garde style. The woman has platinum blonde hair styled into
a voluminous, spiky updo with various splashes of black paint and gold
splatters, creating a dramatic, almost abstract effect. Her face is
partially obscured by the large, bold number "2" in a metallic gold color,
which is overlaid on her forehead and hair, giving a futuristic and modern
appearance. She is dressed in a dark, metallic leather jacket with a high
collar, adding to the edgy, futuristic aesthetic. The background is a
gradient of dark gray tones, which helps to emphasize the gold splatters and
the subject's face. The overall style is modern and artistic, blending
elements of high fashion and digital manipulation. The texture of the
woman's hair is smooth and glossy, contrasting with the rough, splattered
paint. The image is highly stylized, with a focus on bold, contrasting
colors and textures, creating a visually striking and dynamic composition.
The overall mood is modern and edgy, with a strong emphasis on fashion and
artistic expression. <lora:Tom_Nulens:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 404399648602098'
output:
url: images/_00000_23_.png
- text: >-
By Tom Nulens. A digital illustration shoot from a profile camera angle
about a striking portrait of a woman with a unique, abstract design. the
image features a woman in the middle of the frame, with her face facing the
viewer. she appears to be a young adult, with fair skin and long, dark
eyelashes. her hair is styled in a mohawk-like fashion, with black feathers
framing her face. her eyes are accentuated with dramatic black eyeliner, and
her lips are painted with dark lipstick. the background is a beige, textured
surface with various pieces of paper and abstract shapes scattered
throughout, creating a collage-like effect. the woman's face is the focal
point of the image, with the abstract shapes and textures blending together
to create a sense of depth and dimension. the overall effect is a striking
and captivating piece of art that is both visually striking and
eye-catching. <lora:Tom_Nulens_II-000007:0.8045440673828125>
parameters:
negative_prompt: 'Steps: 25 Seed: 724610367327067'
output:
url: images/_00000_28_.png
- text: >-
A digital illustration shoot from a profile camera angle about a striking
portrait of a woman with a unique, abstract design. the image features a
woman in the middle of the frame, with her face facing the viewer. she
appears to be a young adult, with fair skin and long, dark eyelashes. her
hair is styled in a mohawk-like fashion, with black feathers framing her
face. her eyes are accentuated with dramatic black eyeliner, and her lips
are painted with dark lipstick. the background is a beige, textured surface
with various pieces of paper and abstract shapes scattered throughout,
creating a collage-like effect. the woman's face is the focal point of the
image, with the abstract shapes and textures blending together to create a
sense of depth and dimension. the overall effect is a striking and
captivating piece of art that is both visually striking and eye-catching.
<lora:Tom_Nulens:0.8045440673828125> <lora:Steve_McDonald:0.75>
<lora:Fluxartis_Photography:0.6>
parameters:
negative_prompt: 'Steps: 25 Seed: 640717374257156'
output:
url: images/_00000_36_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: By Tom Nulens
license: unknown
---
# Tom Nulens
<Gallery />
## Model description
From https://civitai.com/models/1287601/tom-nulens?modelVersionId=1265556
- Trigger: By Tom Nulens
- Strength: 0.85
- Steps: 25 - 30
- cfg: 3.5
About this version
Inspired by Tom Nulens' artwork.
Tom Nulens is a Visual Designer and Art Director who leverages generative AI to craft imaginative campaigns and captivating visuals. Combining creativity with cutting-edge technology, Tom’s work blends striking aesthetics, polished details, and engaging storytelling to create impactful designs tailored to modern audiences.
- Recommended resources: Fluxmania III or Flux1.dev fp8.
- Settings: dpmpp_2m / sgm_uniform / 25 - 30 steps / cfg 3.5
- Weighting: 0.85
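A hedged diffusers sketch of the settings above (standard diffusers calls; the dpmpp_2m / sgm_uniform scheduler pairing is a ComfyUI setting and is not reproduced here):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev",
                                    torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("Jonjew/TomNulensStyle")
pipe.fuse_lora(lora_scale=0.85)  # "Weighting: 0.85" from the settings above

image = pipe("By Tom Nulens. A digital illustration of a woman's portrait.",
             num_inference_steps=25, guidance_scale=3.5).images[0]
image.save("tom_nulens.png")
```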
## Trigger words
You should use `By Tom Nulens` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/TomNulensStyle/tree/main) them in the Files & versions tab.
|
linkyfan/Qwen2.5-3b-GPRO | linkyfan | 2025-03-08T06:46:47Z | 76 | 1 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-03T00:47:06Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
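The template leaves this blank. Purely as an illustration, a hedged sketch assuming a standard Qwen2-architecture chat checkpoint published under this repo id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage: repo id taken from this card's header; untested.
tok = AutoTokenizer.from_pretrained("linkyfan/Qwen2.5-3b-GPRO")
model = AutoModelForCausalLM.from_pretrained("linkyfan/Qwen2.5-3b-GPRO",
                                             device_map="auto")
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Solve: 12 * 7 = ?"}],
    return_tensors="pt", add_generation_prompt=True).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0],
                 skip_special_tokens=True))
```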
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cavargas10/TRELLIS | cavargas10 | 2025-03-08T06:45:03Z | 0 | 0 | trellis | [
"trellis",
"image-to-3d",
"en",
"arxiv:2412.01506",
"license:mit",
"region:us"
] | image-to-3d | 2025-03-08T06:03:39Z | ---
library_name: trellis
pipeline_tag: image-to-3d
license: mit
language:
- en
---
# TRELLIS Image Large
<!-- Provide a quick summary of what the model is/does. -->
The image-conditioned version of TRELLIS, a large 3D generative model. It was introduced in the paper [Structured 3D Latents for Scalable and Versatile 3D Generation](https://huggingface.co/papers/2412.01506).
Project page: https://trellis3d.github.io/
Code: https://github.com/Microsoft/TRELLIS
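A hedged loading sketch modeled on the usage in the TRELLIS repository README (the pipeline class comes from that codebase, and this mirror may require a local path instead of the repo id):

```python
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline  # from the TRELLIS repo

pipe = TrellisImageTo3DPipeline.from_pretrained("cavargas10/TRELLIS")
pipe.cuda()
outputs = pipe.run(Image.open("example.png"))  # gaussians / radiance field / mesh
```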
|
YxBxRyXJx/Unsloth_QADS_ORPO_DeepseekQwen_14B_no1 | YxBxRyXJx | 2025-03-08T06:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T06:44:17Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YxBxRyXJx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
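Since the card credits Unsloth and TRL, here is a hedged sketch of reloading the checkpoint through Unsloth's fast path (repo id assumed from this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="YxBxRyXJx/Unsloth_QADS_ORPO_DeepseekQwen_14B_no1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path
```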
|
NC122/Llama-3.2-1B-finetuned | NC122 | 2025-03-08T06:38:59Z | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-08T06:00:23Z | ---
license: apache-2.0
tags:
- trl
- sft
---
|
Sophie-Rain-Spider-man-Leaks-Videos/Sophie.Rain.Spiderman.Videos.Instagram | Sophie-Rain-Spider-man-Leaks-Videos | 2025-03-08T06:37:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-08T06:36:48Z | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
Sophie-Rain-Spiderman-Leak-Videos-Free/Sophie.Rain.SpiderMan.Video.Tutorial | Sophie-Rain-Spiderman-Leak-Videos-Free | 2025-03-08T06:36:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-08T06:36:08Z | There has been a lot of buzz on the internet recently regarding a alleged video scandal involving Sophie Rain and Spider-Man.
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
mrpks/bert-finetuned-cptindex | mrpks | 2025-03-08T06:35:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-03-08T06:24:16Z | ---
library_name: transformers
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-cptindex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cptindex
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2067
- Precision: 0.7222
- Recall: 0.7879
- F1: 0.7536
- Accuracy: 0.9291
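A minimal usage sketch (not part of the original card), assuming the checkpoint under this repo id keeps biobert's token-classification head; the label set is specific to the undocumented training data:

```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="mrpks/bert-finetuned-cptindex",
                  aggregation_strategy="simple")
print(tagger("Patient underwent laparoscopic appendectomy."))
```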
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 0.4153 | 0.75 | 0.8182 | 0.7826 | 0.8333 |
| No log | 2.0 | 50 | 0.2187 | 0.6486 | 0.7273 | 0.6857 | 0.9255 |
| No log | 3.0 | 75 | 0.2067 | 0.7222 | 0.7879 | 0.7536 | 0.9291 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
mradermacher/QWEN-Instruct-32B-Token-GGUF | mradermacher | 2025-03-08T06:33:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Viol2000/QWEN-Instruct-32B-Token",
"base_model:quantized:Viol2000/QWEN-Instruct-32B-Token",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T05:12:06Z | ---
base_model: Viol2000/QWEN-Instruct-32B-Token
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Viol2000/QWEN-Instruct-32B-Token
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QWEN-Instruct-32B-Token-GGUF/resolve/main/QWEN-Instruct-32B-Token.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/llama-2-7b-monika-v0.3b-GGUF | mradermacher | 2025-03-08T06:33:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:922-CA/llama-2-7b-monika-v0.3b",
"base_model:quantized:922-CA/llama-2-7b-monika-v0.3b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T05:46:02Z | ---
base_model: 922-CA/llama-2-7b-monika-v0.3b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/922-CA/llama-2-7b-monika-v0.3b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-monika-v0.3b-GGUF/resolve/main/llama-2-7b-monika-v0.3b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ClarenceDan/007022d0-61ef-4fad-bfe4-07af38a01863 | ClarenceDan | 2025-03-08T06:33:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1",
"license:mit",
"region:us"
] | null | 2025-03-08T05:21:23Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 007022d0-61ef-4fad-bfe4-07af38a01863
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c11b9ad10f2938a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c11b9ad10f2938a4_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/007022d0-61ef-4fad-bfe4-07af38a01863
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c11b9ad10f2938a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0d088cfb-e5cd-488c-aa02-38d2d25523be
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0d088cfb-e5cd-488c-aa02-38d2d25523be
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 007022d0-61ef-4fad-bfe4-07af38a01863
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0008 | 9 | nan |
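The logged losses are NaN, so this adapter is likely unusable as trained; for completeness, a generic PEFT loading sketch using the standard peft API and the repo ids from this card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Capybara-7B-V1")
model = PeftModel.from_pretrained(
    base, "ClarenceDan/007022d0-61ef-4fad-bfe4-07af38a01863")
```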
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
genki10/ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold2 | genki10 | 2025-03-08T06:31:54Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-08T06:10:48Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9735
- Qwk: 0.3200
- Mse: 0.9733
- Rmse: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 6 | 5.8878 | 0.0080 | 5.8881 | 2.4265 |
| No log | 2.0 | 12 | 2.9245 | 0.0 | 2.9248 | 1.7102 |
| No log | 3.0 | 18 | 1.6194 | 0.0213 | 1.6198 | 1.2727 |
| No log | 4.0 | 24 | 1.7325 | 0.0345 | 1.7328 | 1.3164 |
| No log | 5.0 | 30 | 1.4855 | 0.0823 | 1.4857 | 1.2189 |
| No log | 6.0 | 36 | 1.1889 | 0.1651 | 1.1887 | 1.0903 |
| No log | 7.0 | 42 | 1.1280 | 0.1929 | 1.1275 | 1.0619 |
| No log | 8.0 | 48 | 0.9689 | 0.2134 | 0.9685 | 0.9841 |
| No log | 9.0 | 54 | 1.4272 | 0.1832 | 1.4264 | 1.1943 |
| No log | 10.0 | 60 | 1.2315 | 0.2411 | 1.2308 | 1.1094 |
| No log | 11.0 | 66 | 1.6767 | 0.1943 | 1.6763 | 1.2947 |
| No log | 12.0 | 72 | 1.3642 | 0.1917 | 1.3640 | 1.1679 |
| No log | 13.0 | 78 | 1.5235 | 0.1944 | 1.5231 | 1.2341 |
| No log | 14.0 | 84 | 2.0727 | 0.1451 | 2.0725 | 1.4396 |
| No log | 15.0 | 90 | 1.5991 | 0.2128 | 1.5988 | 1.2644 |
| No log | 16.0 | 96 | 1.6541 | 0.1702 | 1.6538 | 1.2860 |
| No log | 17.0 | 102 | 1.4618 | 0.1787 | 1.4617 | 1.2090 |
| No log | 18.0 | 108 | 0.8675 | 0.3440 | 0.8670 | 0.9311 |
| No log | 19.0 | 114 | 1.1022 | 0.2873 | 1.1020 | 1.0498 |
| No log | 20.0 | 120 | 2.1904 | 0.1442 | 2.1905 | 1.4800 |
| No log | 21.0 | 126 | 1.6390 | 0.1768 | 1.6390 | 1.2802 |
| No log | 22.0 | 132 | 0.9015 | 0.3012 | 0.9012 | 0.9493 |
| No log | 23.0 | 138 | 1.1640 | 0.2156 | 1.1638 | 1.0788 |
| No log | 24.0 | 144 | 1.4515 | 0.1902 | 1.4514 | 1.2047 |
| No log | 25.0 | 150 | 1.6886 | 0.1810 | 1.6884 | 1.2994 |
| No log | 26.0 | 156 | 0.9759 | 0.2685 | 0.9757 | 0.9878 |
| No log | 27.0 | 162 | 0.9699 | 0.3298 | 0.9696 | 0.9847 |
| No log | 28.0 | 168 | 1.1190 | 0.2820 | 1.1188 | 1.0577 |
| No log | 29.0 | 174 | 1.3450 | 0.2003 | 1.3449 | 1.1597 |
| No log | 30.0 | 180 | 1.0749 | 0.2609 | 1.0746 | 1.0366 |
| No log | 31.0 | 186 | 1.0030 | 0.2746 | 1.0027 | 1.0014 |
| No log | 32.0 | 192 | 1.0923 | 0.2350 | 1.0918 | 1.0449 |
| No log | 33.0 | 198 | 1.0537 | 0.2439 | 1.0535 | 1.0264 |
| No log | 34.0 | 204 | 1.1813 | 0.2531 | 1.1811 | 1.0868 |
| No log | 35.0 | 210 | 0.9487 | 0.3130 | 0.9485 | 0.9739 |
| No log | 36.0 | 216 | 1.0430 | 0.2795 | 1.0427 | 1.0211 |
| No log | 37.0 | 222 | 0.9893 | 0.2980 | 0.9891 | 0.9946 |
| No log | 38.0 | 228 | 0.9735 | 0.3200 | 0.9733 | 0.9866 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
wATCH-Sophie-Rain-Spider-man-leaks-Video/Sophie.Rain.Videos.Link.Short.Clip.Video.Viral.On.Social.Media.X.Twitter | wATCH-Sophie-Rain-Spider-man-leaks-Video | 2025-03-08T06:31:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-08T06:31:14Z | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
wATCH-Sophie-Rain-Spider-Updates-Video/Sophie.Rain.Spider-Man.Video.Tutorial | wATCH-Sophie-Rain-Spider-Updates-Video | 2025-03-08T06:30:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-08T06:26:55Z | Sophie Rain Spider-Man Video Tutorial In recent weeks, a video of Sophie Rain, a little-known social media personality, has gone viral. The video, which is heavily implied to be of an explicit nature, has sparked a significant amount of controversy and debate online.
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://link.rmg.co.uk/nude?updates" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
FearedSlug7/so-vits-svc-Ahoy-Stuart-Brown | FearedSlug7 | 2025-03-08T06:29:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-08T06:26:57Z | Here is Ahoy, or Stuart Brown.
All samples were taken from his youtube: https://www.youtube.com/@XboxAhoy |
Wan-Sheng/Llama-3.2-1B-finetuned | Wan-Sheng | 2025-03-08T06:27:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-03-08T06:26:04Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
texanrangee/40ffc4c3-0a47-4aa7-9175-c431f61f9e64 | texanrangee | 2025-03-08T06:25:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T02:19:26Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso09/433049d9-a061-45fc-83f8-db22e048bd39 | lesso09 | 2025-03-08T06:24:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"license:gemma",
"region:us"
] | null | 2025-03-08T03:13:36Z | ---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 433049d9-a061-45fc-83f8-db22e048bd39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d5a0ca1fecef3b00_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d5a0ca1fecef3b00_train_data.json
type:
field_input: privacy_mask
field_instruction: masked_text
field_output: unmasked_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso09/433049d9-a061-45fc-83f8-db22e048bd39
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000209
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/d5a0ca1fecef3b00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 90
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 426e9c82-b359-492a-b373-0f196a20e3b0
wandb_project: 09a
wandb_run: your_name
wandb_runid: 426e9c82-b359-492a-b373-0f196a20e3b0
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 433049d9-a061-45fc-83f8-db22e048bd39
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
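Since this repo hosts a PEFT LoRA adapter, here is a minimal loading sketch, assuming the adapter applies cleanly on top of the stated base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2")
model = PeftModel.from_pretrained(base, "lesso09/433049d9-a061-45fc-83f8-db22e048bd39")
tokenizer = AutoTokenizer.from_pretrained("UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2")
```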
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000209
- train_batch_size: 4
- eval_batch_size: 4
- seed: 90
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.5465 |
| 0.0009 | 0.0809 | 500 | 0.0008 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Columbidae/Qwen-27B-Pruned-Retrained | Columbidae | 2025-03-08T06:22:46Z | 12 | 0 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"region:us"
] | null | 2025-02-18T03:16:36Z | ---
base_model:
- Qwen/Qwen2.5-32B-Instruct
---
# Pruned Qwen (Epoch 1)
This is [ToastyPigeon/qwen2.5-32b-unnamed-test-model](https://huggingface.co/ToastyPigeon/qwen2.5-32b-unnamed-test-model) pruned down from 32b -> 27b.
Using [PruneMe](https://github.com/arcee-ai/PruneMe) to find layers to remove resulted in the removal of layers `[25, 29)` and `[36, 43)` for a reduction from 64 -> 52 layers.
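The card does not say which tool performed the cut, but a removal like this is commonly expressed as a mergekit passthrough config. A minimal sketch keeping the complementary layer ranges (mergekit usage is an assumption here; ranges are end-exclusive):
```yaml
# Hypothetical passthrough merge: keep everything except [25, 29) and [36, 43).
slices:
  - sources:
      - model: ToastyPigeon/qwen2.5-32b-unnamed-test-model
        layer_range: [0, 25]
  - sources:
      - model: ToastyPigeon/qwen2.5-32b-unnamed-test-model
        layer_range: [29, 36]
  - sources:
      - model: ToastyPigeon/qwen2.5-32b-unnamed-test-model
        layer_range: [43, 64]
merge_method: passthrough
dtype: bfloat16
```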
Trained for 1 epoch on mixed data from the datasets that went into the pre-pruned model (I'll document that later), totaling ~10M tokens of retraining so far.
Coherent but a little dumb. Likely needs more than 10M tokens of retraining to re-align the layers. |
NC122/couplet-json | NC122 | 2025-03-08T06:22:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T06:22:28Z | ---
license: apache-2.0
---
|
lesso07/f7c2c1a2-4569-4418-8a9c-89a07eba80ab | lesso07 | 2025-03-08T06:21:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | 2025-03-08T03:56:44Z | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f7c2c1a2-4569-4418-8a9c-89a07eba80ab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a66c6e85b5025d0e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a66c6e85b5025d0e_train_data.json
type:
field_input: text
field_instruction: title_main
field_output: html
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso07/f7c2c1a2-4569-4418-8a9c-89a07eba80ab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000207
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/a66c6e85b5025d0e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 70
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 712a841b-9a3a-4dfc-8c94-c6ef8d4f9f1e
wandb_project: 07a
wandb_run: your_name
wandb_runid: 712a841b-9a3a-4dfc-8c94-c6ef8d4f9f1e
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f7c2c1a2-4569-4418-8a9c-89a07eba80ab
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000207
- train_batch_size: 4
- eval_batch_size: 4
- seed: 70
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.4302 |
| 0.0524 | 0.1042 | 500 | 0.0509 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso15/1c7a6067-30ea-4d89-8411-981ee8038bfb | lesso15 | 2025-03-08T06:21:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | 2025-03-08T03:56:59Z | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c7a6067-30ea-4d89-8411-981ee8038bfb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a66c6e85b5025d0e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a66c6e85b5025d0e_train_data.json
type:
field_input: text
field_instruction: title_main
field_output: html
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso15/1c7a6067-30ea-4d89-8411-981ee8038bfb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/a66c6e85b5025d0e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 150
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 712a841b-9a3a-4dfc-8c94-c6ef8d4f9f1e
wandb_project: 15a
wandb_run: your_name
wandb_runid: 712a841b-9a3a-4dfc-8c94-c6ef8d4f9f1e
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c7a6067-30ea-4d89-8411-981ee8038bfb
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.4299 |
| 0.0527 | 0.1042 | 500 | 0.0512 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
davidwu1991/gemma-2-2B-it-thinking-function_calling-V0 | davidwu1991 | 2025-03-08T06:19:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T06:17:03Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="davidwu1991/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
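For reference, a minimal TRL SFT sketch; the dataset and hyperparameters below are placeholders, not the actual training setup:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the real function-calling data is not documented here.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-2-2b-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-2-2B-it-thinking-function_calling-V0"),
)
trainer.train()
```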
### Framework versions
- TRL: 0.15.2
- Transformers: 4.47.1
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ProperKetoCapsules779/ProperKetoCapsules | ProperKetoCapsules779 | 2025-03-08T06:18:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-08T06:15:55Z | Proper Keto Capsules France : Les capsules Proper Keto sont un complément alimentaire de pointe conçu pour soutenir l'état naturel de cétose de votre corps. Fabriquées à partir d’un mélange d’ingrédients de haute qualité et scientifiquement prouvés, ces capsules sont conçues pour vous aider à atteindre et à maintenir les nombreux avantages d’un mode de vie cétogène.
## **[Cliquez ici pour commander sur le site officiel de Proper](https://properketocapsules.fr)**
Vous souhaitez essayer les capsules Proper Keto mais vous ne savez pas si elles sont la meilleure option pour vous ? Pour bien comprendre ce que ce supplément offre, il est important de l’examiner attentivement avant de prendre une décision.
Pour déterminer si les capsules Proper Keto sont à la hauteur du battage médiatique et valent votre argent, nous examinerons chaque facette du produit dans cette revue. Nous discutons de tout : ingrédients, efficacité, avantages possibles et expériences des utilisateurs. Par conséquent, lisez la suite pour obtenir des informations pertinentes avant d’acheter et décidez si les capsules Proper Keto sont la meilleure option pour vous.
## Que sont les capsules Keto appropriées ?
Un type spécial de complément alimentaire appelé Proper Keto Capsules est conçu pour aider les gens à atteindre leurs objectifs de perte de poids en favorisant l'état de cétose. En cas de cétose, le corps utilise les graisses stockées au lieu des glucides comme principale source d’énergie dans des conditions métaboliques. Les ingrédients de ces capsules sont conçus pour aider le corps à entrer et à rester en cétose avec plus de succès.
La prise de capsules Proper Keto entraîne l’absorption de cétones exogènes dans le corps. Ces cétones servent à stimuler la production normale de cétones par le corps pendant les périodes de restriction glucidique. Les bonnes capsules céto soutiennent le changement métabolique vers l’utilisation des graisses comme carburant en augmentant la disponibilité des cétones dans la circulation sanguine.
Le bêta-hydroxybutyrate (BHB), une cétone exogène, est l’ingrédient principal des capsules Proper Keto. Le BHB fournit au corps une source d’énergie alternative à partir des réserves de graisse, permettant au corps d’entrer plus rapidement en cétose. Les capsules cétogènes appropriées peuvent également contenir des composants nutritionnels supplémentaires tels que des triglycérides à chaîne moyenne (TCM) et des électrolytes pour faciliter la transition vers la cétose et réduire les effets secondaires potentiels.
Lorsque quelqu’un souhaite utiliser les capsules Proper Keto, il les combine généralement avec un régime cétogène dans sa routine quotidienne. Un faible apport en glucides, une consommation modérée en protéines et un apport élevé en graisses sont les caractéristiques d’un régime cétogène. En limitant l’apport en glucides, le corps est obligé de recourir davantage aux graisses pour produire de l’énergie. Associé aux bonnes capsules céto, cela aide le corps à entrer en cétose plus efficacement.
## **[Cliquez ici pour commander sur le site officiel de Proper](https://properketocapsules.fr)**
## Les capsules Keto appropriées sont-elles naturelles et sûres ? - Que contiennent les capsules Proper Keto ?
Les capsules Proper Keto sont fabriquées en mélangeant des composants naturels et sûrs soigneusement sélectionnés pour favoriser la perte de poids et le bien-être général. Les capsules cétogènes appropriées contiennent plusieurs ingrédients importants, notamment :
**poudre de vinaigre de cidre de pomme**
L'acide acétique présent dans la poudre de vinaigre de cidre de pomme et fabriqué à partir de pommes fermentées est connu pour ses bienfaits potentiels pour la santé et peut aider à la perte de poids en augmentant la satiété et en améliorant la digestion.
## Poudre de triglycérides à chaîne moyenne (MCT) végétalienne
Parce que les MCT sont un type de graisse qui est facilement absorbé et converti en cétones, ils constituent un excellent complément à un régime cétogène. La poudre MCT végétalienne fournit un regain d'énergie constant et peut soutenir la cétose.
**vitamine E**
La vitamine E favorise la santé et le bien-être général en agissant comme antioxydant et en protégeant les cellules des dommages causés par les radicaux libres.
**dioxyde de silicium**
Le dioxyde de silicium, souvent utilisé comme agent de séparation, assure une répartition uniforme des composants de la capsule et aide à prévenir l'agglutination.
**carbonate de calcium**
Le carbonate de calcium est une source de calcium, un minéral important pour la santé des os, des muscles et de la transmission nerveuse.
**vitamine C**
La vitamine C antioxydante soutient la production de collagène, renforce le système immunitaire et favorise l'absorption du fer.
**chlorure de potassium**
Un électrolyte appelé chlorure de potassium aide à réguler l’équilibre hydrique du corps, les contractions musculaires et les impulsions nerveuses.
**oxyde de magnésium**
L'oxyde de magnésium est une source de magnésium. Le corps utilise le magnésium pour plus de 300 activités métaboliques, notamment la production d’énergie, la contraction musculaire et la transmission des signaux nerveux.
**zinc**
Le zinc est un minéral essentiel qui soutient la synthèse de l’ADN, la cicatrisation des plaies et l’activité du système immunitaire.
**vitamines A, B12 et D**
Ces vitamines soutiennent l’organisme dans la formation des globules rouges, l’absorption du calcium et la vision, entre autres.
La sécurité et l’efficacité de chaque ingrédient ont été soigneusement étudiées et les capsules Proper Keto sont fabriquées selon des directives de contrôle qualité strictes pour garantir la pureté et la puissance. Contenant des ingrédients de première qualité sans effets secondaires connus, les ingrédients naturels et sûrs des capsules Proper Keto offrent aux utilisateurs une tranquillité d'esprit tout en les aidant dans leurs efforts de perte de poids.
## **[Cliquez ici pour commander sur le site officiel de Proper](https://properketocapsules.fr)**
|
jkaunert/smolvlm-instruct-spider-classifier | jkaunert | 2025-03-08T06:15:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"idefics3",
"image-text-to-text",
"conversational",
"en",
"dataset:jkaunert/spider_dataset_dorsal_view",
"dataset:jkaunert/spider_dataset_egg_sacs",
"dataset:jkaunert/spider_dataset_eyes_visible",
"dataset:jkaunert/spider_dataset_female",
"dataset:jkaunert/spider_dataset_gravid",
"dataset:jkaunert/spider_dataset_in_retreat",
"dataset:jkaunert/spider_dataset_lateral_view",
"dataset:jkaunert/spider_dataset_male",
"dataset:jkaunert/spider_dataset_penultimate",
"dataset:jkaunert/spider_dataset_spiderlings",
"dataset:jkaunert/spider_dataset_web_present",
"dataset:jkaunert/spider_dataset_with_prey",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-03-08T05:29:26Z | ---
library_name: transformers
datasets:
- jkaunert/spider_dataset_dorsal_view
- jkaunert/spider_dataset_egg_sacs
- jkaunert/spider_dataset_eyes_visible
- jkaunert/spider_dataset_female
- jkaunert/spider_dataset_gravid
- jkaunert/spider_dataset_in_retreat
- jkaunert/spider_dataset_lateral_view
- jkaunert/spider_dataset_male
- jkaunert/spider_dataset_penultimate
- jkaunert/spider_dataset_spiderlings
- jkaunert/spider_dataset_web_present
- jkaunert/spider_dataset_with_prey
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
---
# SmolVLM-Instruct-Spider-Classifier
SmolVLM-Instruct-Spider-Classifier is a fine-tuned version of SmolVLM that accepts arbitrary sequences of image and text inputs to produce text outputs.
SmolVLM-Instruct-Spider-Classifier is designed to analyze spiders in images and attempt to determine their family, genus, and/or species. Its lightweight
architecture makes it suitable for on-device applications while maintaining strong performance.
## Model Details
## Model Summary
- **Developed by:** Joshua Kaunert
- **Model type:** Multi-modal model (image+text)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary)
## Resources
- **Demo:** [SmolVLM-Instruct-Spider-Classifier Demo]()
- **Blog:** [Blog post]()
## Uses
SmolVLM-Instruct-Spider-Classifier is fine-tuned for inferring spider taxonomy in multimodal (image + text) tasks, where the input comprises text queries along with one or more images of spiders. The model does not support image generation.
### Technical Summary
SmolVLM-Instruct-Spider-Classifier is a fine-tuned version of SmolVLM, which leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. It introduces several changes compared to previous Idefics models:
- **Image compression:** Introduces a more radical image compression compared to Idefics3 to enable the model to infer faster and use less RAM.
- **Visual Token Encoding:** SmolVLM uses 81 visual tokens to encode image patches of size 384×384. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.
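As a back-of-the-envelope illustration of the encoding described above (ignoring any extra global/overview tokens the processor may add):
```python
# Rough visual-token count for a 1536x1536 input (the default N=4 setting).
patch_size = 384
tokens_per_patch = 81
side = 4 * patch_size                         # 1536
n_patches = (side // patch_size) ** 2         # 16 patches
visual_tokens = n_patches * tokens_per_patch  # 1296 tokens
print(visual_tokens)
```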
### How to get started
You can use transformers to load and infer SmolVLM-Instruct-Spider-Classifier.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load images
image1 = load_image("https://citybugs.tamu.edu/wp-content/uploads/sites/3/2014/07/IMG_7929_sm.jpg")
image2 = load_image("https://southcoastbotanicgarden.org/wp-content/uploads/2022/11/Spiders-insects.jpg")
# Initialize processor and model
processor = AutoProcessor.from_pretrained("jkaunert/SmolVLM-Instruct-Spider-Classifier")
model = AutoModelForVision2Seq.from_pretrained(
"jkaunert/SmolVLM-Instruct-Spider-Classifier",
torch_dtype=torch.bfloat16,
_attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
).to(DEVICE)
# Create input messages
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "image"},
{"type": "text", "text": "What type of spiders are in the two images?"}
]
},
]
# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = inputs.to(DEVICE)
# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(
generated_ids,
skip_special_tokens=True,
)
print(generated_texts[0])
"""
Wolf Spider
Black and Yellow Garden Spider
"""
```
### Model optimizations
**Precision**: For better performance, load and run the model in half-precision (`torch.float16` or `torch.bfloat16`) if your hardware supports it.
```python
from transformers import AutoModelForVision2Seq
import torch
model = AutoModelForVision2Seq.from_pretrained(
"jkaunert/SmolVLM-Instruct-Spider-Classifier",
torch_dtype=torch.bfloat16
).to("cuda")
```
You can also load SmolVLM-Instruct-Spider-Classifier with 4/8-bit quantization using bitsandbytes, torchao or Quanto. Refer to [this page](https://huggingface.co/docs/transformers/en/main_classes/quantization) for other options.
```python
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig
import torch
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForVision2Seq.from_pretrained(
"jkaunert/SmolVLM-Instruct-Spider-Classifier",
quantization_config=quantization_config,
)
```
**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*384}` when initializing the processor, where N is your desired value. The default `N=4` works well, which results in input images of
size 1536×1536. For documents, `N=5` might be beneficial. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.
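For example, a minimal sketch that lowers the resolution to `N=3` (roughly 1152×1152 inputs), following the `size` pattern described above:
```python
from transformers import AutoProcessor

# N scales the longest edge in multiples of 384; smaller N saves GPU memory.
N = 3
processor = AutoProcessor.from_pretrained(
    "jkaunert/SmolVLM-Instruct-Spider-Classifier",
    size={"longest_edge": N * 384},
)
```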
|
jiatsol/First_llm | jiatsol | 2025-03-08T06:14:23Z | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-08T01:23:03Z | ---
license: apache-2.0
tags:
- trl
- sft
---
|
YxBxRyXJx/Unsloth_QADS_ORPO_Qwen_14B_no1 | YxBxRyXJx | 2025-03-08T06:14:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T06:14:10Z | ---
base_model: unsloth/qwen2.5-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YxBxRyXJx
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
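A minimal Unsloth loading sketch (the sequence length and quantization flag below are illustrative assumptions, not documented settings):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="YxBxRyXJx/Unsloth_QADS_ORPO_Qwen_14B_no1",
    max_seq_length=2048,   # assumed; pick what your task needs
    load_in_4bit=True,     # matches the 4-bit base this was tuned from
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```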
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ChrisWeiWei/FirstModel | ChrisWeiWei | 2025-03-08T06:13:53Z | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-08T01:23:05Z | ---
license: apache-2.0
tags:
- trl
- sft
---
|
kmugglet/landerv2 | kmugglet | 2025-03-08T06:13:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-08T06:13:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.46 +/- 21.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent from the Hub (the checkpoint filename below is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the archive name is assumed.
checkpoint = load_from_hub("kmugglet/landerv2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kaizen9/falcon3-1B-fp | kaizen9 | 2025-03-08T06:13:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T05:28:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Alphatao/83085f87-a61b-4e55-9e3a-5c94617602a4 | Alphatao | 2025-03-08T06:12:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2025-03-08T00:47:29Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83085f87-a61b-4e55-9e3a-5c94617602a4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1fed254576df142d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fed254576df142d_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/83085f87-a61b-4e55-9e3a-5c94617602a4
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1734
micro_batch_size: 4
mlflow_experiment_name: /tmp/1fed254576df142d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04
wandb_entity: null
wandb_mode: online
wandb_name: 9090c43e-ec93-46ec-8b87-d11083a1aa8d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9090c43e-ec93-46ec-8b87-d11083a1aa8d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 83085f87-a61b-4e55-9e3a-5c94617602a4
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1734
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0314 | 0.0007 | 1 | 0.2233 |
| 1.4041 | 0.0682 | 100 | 0.1848 |
| 1.4015 | 0.1363 | 200 | 0.1847 |
| 1.3008 | 0.2045 | 300 | 0.1823 |
| 1.4179 | 0.2727 | 400 | 0.1815 |
| 1.2804 | 0.3408 | 500 | 0.1807 |
| 1.3661 | 0.4090 | 600 | 0.1787 |
| 1.7189 | 0.4772 | 700 | 0.1777 |
| 1.2251 | 0.5453 | 800 | 0.1764 |
| 1.5796 | 0.6135 | 900 | 0.1748 |
| 1.5114 | 0.6817 | 1000 | 0.1735 |
| 1.4334 | 0.7498 | 1100 | 0.1725 |
| 1.2593 | 0.8180 | 1200 | 0.1713 |
| 1.14 | 0.8862 | 1300 | 0.1704 |
| 1.2217 | 0.9543 | 1400 | 0.1698 |
| 0.9364 | 1.0225 | 1500 | 0.1717 |
| 1.0701 | 1.0907 | 1600 | 0.1721 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xuxinyao123/r1-Distill-Qwen32B-JobDescription-LoRA | xuxinyao123 | 2025-03-08T06:12:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T06:12:01Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xuxinyao123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xuxinyao123/r1-Distill-Qwen32B-JobDescription | xuxinyao123 | 2025-03-08T06:11:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T05:47:25Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xuxinyao123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
azureWoo/HW1 | azureWoo | 2025-03-08T06:10:55Z | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-08T01:23:03Z | ---
license: apache-2.0
tags:
- trl
- sft
---
|
genki10/ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold1 | genki10 | 2025-03-08T06:10:40Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-08T05:28:37Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8327
- Qwk: 0.3976
- Mse: 0.8323
- Rmse: 0.9123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 6 | 8.5660 | 0.0 | 8.5634 | 2.9263 |
| No log | 2.0 | 12 | 4.3556 | 0.0 | 4.3536 | 2.0865 |
| No log | 3.0 | 18 | 2.3360 | -0.0257 | 2.3343 | 1.5278 |
| No log | 4.0 | 24 | 1.3149 | 0.0 | 1.3134 | 1.1461 |
| No log | 5.0 | 30 | 1.5626 | 0.1351 | 1.5613 | 1.2495 |
| No log | 6.0 | 36 | 1.2271 | 0.1048 | 1.2258 | 1.1072 |
| No log | 7.0 | 42 | 0.9125 | 0.2193 | 0.9114 | 0.9547 |
| No log | 8.0 | 48 | 0.9870 | 0.2990 | 0.9861 | 0.9930 |
| No log | 9.0 | 54 | 1.2005 | 0.2640 | 1.1997 | 1.0953 |
| No log | 10.0 | 60 | 1.8297 | 0.1706 | 1.8292 | 1.3525 |
| No log | 11.0 | 66 | 1.4883 | 0.2663 | 1.4881 | 1.2199 |
| No log | 12.0 | 72 | 1.3868 | 0.2591 | 1.3865 | 1.1775 |
| No log | 13.0 | 78 | 1.3561 | 0.2743 | 1.3559 | 1.1644 |
| No log | 14.0 | 84 | 0.9493 | 0.3647 | 0.9491 | 0.9742 |
| No log | 15.0 | 90 | 0.9508 | 0.4053 | 0.9507 | 0.9750 |
| No log | 16.0 | 96 | 1.0064 | 0.3290 | 1.0062 | 1.0031 |
| No log | 17.0 | 102 | 1.3650 | 0.2374 | 1.3646 | 1.1682 |
| No log | 18.0 | 108 | 1.2152 | 0.2955 | 1.2148 | 1.1022 |
| No log | 19.0 | 114 | 0.9137 | 0.4110 | 0.9134 | 0.9557 |
| No log | 20.0 | 120 | 0.7971 | 0.4204 | 0.7968 | 0.8926 |
| No log | 21.0 | 126 | 0.8015 | 0.4281 | 0.8013 | 0.8952 |
| No log | 22.0 | 132 | 1.1552 | 0.2907 | 1.1547 | 1.0746 |
| No log | 23.0 | 138 | 1.3357 | 0.2812 | 1.3351 | 1.1554 |
| No log | 24.0 | 144 | 0.8835 | 0.3462 | 0.8830 | 0.9397 |
| No log | 25.0 | 150 | 0.8720 | 0.3616 | 0.8716 | 0.9336 |
| No log | 26.0 | 156 | 1.0973 | 0.3180 | 1.0969 | 1.0473 |
| No log | 27.0 | 162 | 0.8798 | 0.3740 | 0.8796 | 0.9378 |
| No log | 28.0 | 168 | 0.9333 | 0.2919 | 0.9330 | 0.9659 |
| No log | 29.0 | 174 | 0.9176 | 0.3745 | 0.9173 | 0.9578 |
| No log | 30.0 | 180 | 0.7844 | 0.3962 | 0.7840 | 0.8855 |
| No log | 31.0 | 186 | 0.8459 | 0.3755 | 0.8455 | 0.9195 |
| No log | 32.0 | 192 | 0.7618 | 0.4214 | 0.7614 | 0.8726 |
| No log | 33.0 | 198 | 0.8272 | 0.4329 | 0.8269 | 0.9093 |
| No log | 34.0 | 204 | 0.7425 | 0.4277 | 0.7421 | 0.8615 |
| No log | 35.0 | 210 | 0.7926 | 0.4160 | 0.7923 | 0.8901 |
| No log | 36.0 | 216 | 0.7006 | 0.4421 | 0.7003 | 0.8368 |
| No log | 37.0 | 222 | 0.9709 | 0.3262 | 0.9703 | 0.9851 |
| No log | 38.0 | 228 | 0.7722 | 0.4426 | 0.7718 | 0.8785 |
| No log | 39.0 | 234 | 0.8496 | 0.3538 | 0.8493 | 0.9215 |
| No log | 40.0 | 240 | 0.8142 | 0.3669 | 0.8139 | 0.9021 |
| No log | 41.0 | 246 | 0.7794 | 0.4173 | 0.7791 | 0.8826 |
| No log | 42.0 | 252 | 0.7848 | 0.4010 | 0.7844 | 0.8856 |
| No log | 43.0 | 258 | 0.8955 | 0.3481 | 0.8951 | 0.9461 |
| No log | 44.0 | 264 | 0.8009 | 0.3883 | 0.8005 | 0.8947 |
| No log | 45.0 | 270 | 0.8590 | 0.4061 | 0.8586 | 0.9266 |
| No log | 46.0 | 276 | 0.8759 | 0.3669 | 0.8754 | 0.9356 |
| No log | 47.0 | 282 | 0.8940 | 0.3971 | 0.8936 | 0.9453 |
| No log | 48.0 | 288 | 0.7105 | 0.4454 | 0.7101 | 0.8427 |
| No log | 49.0 | 294 | 0.7844 | 0.4154 | 0.7840 | 0.8855 |
| No log | 50.0 | 300 | 0.7501 | 0.4294 | 0.7497 | 0.8658 |
| No log | 51.0 | 306 | 0.9443 | 0.3583 | 0.9437 | 0.9714 |
| No log | 52.0 | 312 | 0.8329 | 0.3818 | 0.8325 | 0.9124 |
| No log | 53.0 | 318 | 0.7643 | 0.4224 | 0.7639 | 0.8740 |
| No log | 54.0 | 324 | 0.8095 | 0.3896 | 0.8092 | 0.8995 |
| No log | 55.0 | 330 | 0.7666 | 0.4208 | 0.7662 | 0.8753 |
| No log | 56.0 | 336 | 0.7739 | 0.4078 | 0.7735 | 0.8795 |
| No log | 57.0 | 342 | 0.7472 | 0.4494 | 0.7468 | 0.8642 |
| No log | 58.0 | 348 | 0.7146 | 0.4491 | 0.7143 | 0.8452 |
| No log | 59.0 | 354 | 0.8430 | 0.3948 | 0.8426 | 0.9179 |
| No log | 60.0 | 360 | 0.7941 | 0.3836 | 0.7937 | 0.8909 |
| No log | 61.0 | 366 | 0.7308 | 0.4265 | 0.7305 | 0.8547 |
| No log | 62.0 | 372 | 0.7799 | 0.4304 | 0.7795 | 0.8829 |
| No log | 63.0 | 378 | 0.8374 | 0.3505 | 0.8370 | 0.9149 |
| No log | 64.0 | 384 | 0.7378 | 0.4154 | 0.7375 | 0.8588 |
| No log | 65.0 | 390 | 0.7359 | 0.4359 | 0.7356 | 0.8577 |
| No log | 66.0 | 396 | 0.8004 | 0.4094 | 0.8000 | 0.8944 |
| No log | 67.0 | 402 | 0.7774 | 0.4164 | 0.7770 | 0.8815 |
| No log | 68.0 | 408 | 0.8016 | 0.4308 | 0.8012 | 0.8951 |
| No log | 69.0 | 414 | 0.7948 | 0.4181 | 0.7943 | 0.8913 |
| No log | 70.0 | 420 | 0.7841 | 0.4272 | 0.7838 | 0.8853 |
| No log | 71.0 | 426 | 0.7494 | 0.4312 | 0.7490 | 0.8655 |
| No log | 72.0 | 432 | 0.7778 | 0.4054 | 0.7774 | 0.8817 |
| No log | 73.0 | 438 | 0.8187 | 0.3944 | 0.8182 | 0.9046 |
| No log | 74.0 | 444 | 0.7657 | 0.4435 | 0.7654 | 0.8748 |
| No log | 75.0 | 450 | 0.8143 | 0.4063 | 0.8139 | 0.9022 |
| No log | 76.0 | 456 | 0.7605 | 0.4289 | 0.7602 | 0.8719 |
| No log | 77.0 | 462 | 0.8327 | 0.3976 | 0.8323 | 0.9123 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
frank1401/homework1140308 | frank1401 | 2025-03-08T06:09:44Z | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-08T01:23:03Z | ---
license: apache-2.0
tags:
- trl
- sft
---
|
wenh2004/chatglm3-lora-legal | wenh2004 | 2025-03-08T06:09:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T06:09:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Cydonia-24B-v2.1-GGUF | mradermacher | 2025-03-08T06:08:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Cydonia-24B-v2.1",
"base_model:quantized:TheDrummer/Cydonia-24B-v2.1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:01:03Z | ---
base_model: TheDrummer/Cydonia-24B-v2.1
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TheDrummer/Cydonia-24B-v2.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Cydonia-24B-v2.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
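As a quick, hypothetical sketch (not part of the original README), the snippet below downloads one of the quants from the table in the next section and runs it with the third-party `llama-cpp-python` bindings; any llama.cpp-compatible runtime works the same way.
```python
# Minimal sketch: fetch a quant listed below and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Cydonia-24B-v2.1-GGUF",
    filename="Cydonia-24B-v2.1.Q4_K_S.gguf",  # "fast, recommended" entry
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```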
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cydonia-24B-v2.1-GGUF/resolve/main/Cydonia-24B-v2.1.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
texanrangee/cb5cc7da-6009-4e41-8afa-4d09161ecade | texanrangee | 2025-03-08T06:06:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T00:50:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Peaceuai/model_3 | Peaceuai | 2025-03-08T06:01:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T06:00:24Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Peaceuai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sophiayk20/m2m100_418M_pt_formal | sophiayk20 | 2025-03-08T06:00:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-08T01:43:11Z | ---
library_name: transformers
license: mit
base_model: facebook/m2m100_418M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M_pt_formal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M_pt_formal
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3145
- Bleu: 40.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch restating them follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
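As a hedged illustration (not the authors' actual script), the values above map directly onto `transformers.Seq2SeqTrainingArguments`; only the output directory below is a placeholder.
```python
# Hypothetical sketch: the hyperparameters above expressed as
# Seq2SeqTrainingArguments. Only output_dir is invented here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="m2m100_418M_pt_formal",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                 # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3.0,
)
```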
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 4.0262 | 0.3956 | 500 | 0.3792 | 36.1929 |
| 0.3736 | 0.7911 | 1000 | 0.3307 | 38.8103 |
| 0.3346 | 1.1867 | 1500 | 0.3236 | 39.2764 |
| 0.3118 | 1.5823 | 2000 | 0.3194 | 39.7404 |
| 0.3084 | 1.9778 | 2500 | 0.3160 | 40.0558 |
| 0.2856 | 2.3734 | 3000 | 0.3154 | 40.2045 |
| 0.277 | 2.7690 | 3500 | 0.3145 | 40.5054 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Nayana-cognitivelab/Nayana-IR-colsmol_v0_1-hi-12k-4bit | Nayana-cognitivelab | 2025-03-08T06:00:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T06:00:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TArtx/parler-tts-mini-narrated-30 | TArtx | 2025-03-08T05:59:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-08T05:42:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DSR1-Qwen-32B-still-GGUF | mradermacher | 2025-03-08T05:58:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:moogician/DSR1-Qwen-32B-still",
"base_model:quantized:moogician/DSR1-Qwen-32B-still",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T05:11:01Z | ---
base_model: moogician/DSR1-Qwen-32B-still
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/moogician/DSR1-Qwen-32B-still
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
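As a hedged sketch of the multi-part case mentioned above (the quants in this repo ship as single files, so the part names below are hypothetical), split GGUF files are simply byte-concatenated back into one file before loading:
```python
# Hypothetical sketch: byte-concatenate split GGUF parts into one file.
# The part names are illustrative; this repo's quants are single files.
import shutil

parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]
with open("model.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```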
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DSR1-Qwen-32B-still-GGUF/resolve/main/DSR1-Qwen-32B-still.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Sunnyboon007/Lola | Sunnyboon007 | 2025-03-08T05:57:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T05:57:29Z | ---
license: apache-2.0
---
|
mlx-community/Preferred-MedLLM-Qwen-72B-8bit | mlx-community | 2025-03-08T05:57:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"en",
"ja",
"base_model:pfnet/Preferred-MedLLM-Qwen-72B",
"base_model:quantized:pfnet/Preferred-MedLLM-Qwen-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-03-08T04:17:29Z | ---
base_model: pfnet/Preferred-MedLLM-Qwen-72B
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: qwen
tags:
- mlx
---
# mlx-community/Preferred-MedLLM-Qwen-72B-8bit
The model [mlx-community/Preferred-MedLLM-Qwen-72B-8bit](https://huggingface.co/mlx-community/Preferred-MedLLM-Qwen-72B-8bit) was
converted to MLX format from [pfnet/Preferred-MedLLM-Qwen-72B](https://huggingface.co/pfnet/Preferred-MedLLM-Qwen-72B)
using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the 8-bit MLX weights plus tokenizer.
model, tokenizer = load("mlx-community/Preferred-MedLLM-Qwen-72B-8bit")

prompt = "hello"

# Apply the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Generate a completion; verbose=True also prints generation stats.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
BICORP/Test-16 | BICORP | 2025-03-08T05:56:32Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-03-07T18:40:47Z | ---
license: apache-2.0
---
|
MOtifssss/Qwen2.5-1.5B-Open-R1-Distill | MOtifssss | 2025-03-08T05:53:36Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-14T03:56:25Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Chat-style generation with the fine-tuned checkpoint on GPU.
generator = pipeline("text-generation", model="MOtifssss/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gaotang/Self-Reflection%20Fine-tuning/runs/gvl1pohh)
This model was trained with SFT.
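As a minimal, hypothetical sketch of that setup (not the exact training script; the dataset below is a placeholder, not the data actually used):
```python
# Hypothetical SFT sketch with TRL; the dataset is a stand-in, not the
# distillation data used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-1.5B-Open-R1-Distill"),
)
trainer.train()
```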
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Nayana-cognitivelab/Nayana-IR-colpali_v1_3-combined-15k-4bit | Nayana-cognitivelab | 2025-03-08T05:50:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T05:50:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enuma-elis/Llama-3.3-70B-Instruct-bnb-4bit | enuma-elis | 2025-03-08T05:50:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T05:04:32Z | ---
base_model: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** enuma-elis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.3-70B-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nayana-cognitivelab/Nayana-IR-colpali_v1_3-kn-12k-4bit | Nayana-cognitivelab | 2025-03-08T05:48:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T05:48:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XingYu520/svc | XingYu520 | 2025-03-08T05:46:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T05:46:40Z | ---
license: apache-2.0
---
|
Nayana-cognitivelab/Nayana-IR-colpali_v1_3-hi-47k-4bit | Nayana-cognitivelab | 2025-03-08T05:45:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T05:45:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DeepThinkers-Phi4-GGUF | mradermacher | 2025-03-08T05:41:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:EpistemeAI/DeepThinkers-Phi4",
"base_model:quantized:EpistemeAI/DeepThinkers-Phi4",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:06:29Z | ---
base_model: EpistemeAI/DeepThinkers-Phi4
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EpistemeAI/DeepThinkers-Phi4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepThinkers-Phi4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q2_K.gguf) | Q2_K | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q3_K_M.gguf) | Q3_K_M | 7.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.IQ4_XS.gguf) | IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q4_K_M.gguf) | Q4_K_M | 9.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q5_K_M.gguf) | Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepThinkers-Phi4-GGUF/resolve/main/DeepThinkers-Phi4.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/leads-mistral-7b-v1-GGUF | mradermacher | 2025-03-08T05:41:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zifeng-ai/leads-mistral-7b-v1",
"base_model:quantized:zifeng-ai/leads-mistral-7b-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:20:28Z | ---
base_model: zifeng-ai/leads-mistral-7b-v1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zifeng-ai/leads-mistral-7b-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/leads-mistral-7b-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
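Concatenation itself is simple; if a quant ever ships in parts, joining them looks like this (the part names below are hypothetical — this repo's files are single-part):

```python
# only needed for multi-part quants; the part names here are hypothetical
import shutil

parts = ["leads-mistral-7b-v1.Q8_0.gguf.part1of2",
         "leads-mistral-7b-v1.Q8_0.gguf.part2of2"]
with open("leads-mistral-7b-v1.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # byte-for-byte equivalent of `cat`
```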
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/leads-mistral-7b-v1-GGUF/resolve/main/leads-mistral-7b-v1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso13/3d3dbb29-e18e-4edb-b7d8-6f569dfffd83 | lesso13 | 2025-03-08T05:39:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | 2025-03-08T03:29:04Z | ---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d3dbb29-e18e-4edb-b7d8-6f569dfffd83
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1a2cc6d384a11c08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1a2cc6d384a11c08_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso13/3d3dbb29-e18e-4edb-b7d8-6f569dfffd83
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000213
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/1a2cc6d384a11c08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 130
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d6e8b65-2a1a-4258-ac7a-632a18b74ff6
wandb_project: 13a
wandb_run: your_name
wandb_runid: 2d6e8b65-2a1a-4258-ac7a-632a18b74ff6
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d3dbb29-e18e-4edb-b7d8-6f569dfffd83
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5126
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000213
- train_batch_size: 4
- eval_batch_size: 4
- seed: 130
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.2911 |
| 1.5286 | 0.1132 | 500 | 1.5126 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso16/5852b5e9-8156-4032-b100-3170940cd041 | lesso16 | 2025-03-08T05:39:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | 2025-03-08T03:29:08Z | ---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5852b5e9-8156-4032-b100-3170940cd041
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1a2cc6d384a11c08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1a2cc6d384a11c08_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso16/5852b5e9-8156-4032-b100-3170940cd041
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000216
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/1a2cc6d384a11c08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 160
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d6e8b65-2a1a-4258-ac7a-632a18b74ff6
wandb_project: 16a
wandb_run: your_name
wandb_runid: 2d6e8b65-2a1a-4258-ac7a-632a18b74ff6
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5852b5e9-8156-4032-b100-3170940cd041
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000216
- train_batch_size: 4
- eval_batch_size: 4
- seed: 160
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.2907 |
| 1.5151 | 0.1132 | 500 | 1.5113 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/sdxl-noobsim-v46vpred-ultrares-v46noobsimvpred15-sdxl | John6666 | 2025-03-08T05:35:35Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"photorealistic",
"CLIP_L_OMEGAβ",
"CLIP_G_OMEGAβ",
"finetune",
"experiment",
"v-pred",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-03-08T05:30:28Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- photorealistic
- CLIP_L_OMEGAβ
- CLIP_G_OMEGAβ
- finetune
- experiment
- v-pred
---
Original model is [here](https://civitai.com/models/1177470?modelVersionId=1504726).
This model was created by [AbstractPhila](https://civitai.com/user/AbstractPhila).
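Since the checkpoint is tagged v-pred, loading it in diffusers typically means switching the scheduler to v-prediction. A hedged sketch (the scheduler flags are the usual v-pred settings, not something this card specifies, and the prompt is illustrative):

```python
# a hedged sketch for a v-prediction SDXL checkpoint loaded from this repo
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

repo = "John6666/sdxl-noobsim-v46vpred-ultrares-v46noobsimvpred15-sdxl"
pipe = StableDiffusionXLPipeline.from_pretrained(repo, torch_dtype=torch.float16)
# standard v-pred scheduler settings (assumption, not stated in this card)
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction", rescale_betas_zero_snr=True
)
pipe.to("cuda")
image = pipe("1girl, anime style, detailed background").images[0]
image.save("sample.png")
```
|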
lesso11/549b03ab-54a6-48ce-9b14-3f40ae246b2c | lesso11 | 2025-03-08T05:34:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T00:13:56Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 549b03ab-54a6-48ce-9b14-3f40ae246b2c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d148afb262ef385c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d148afb262ef385c_train_data.json
type:
field_input: alpaca_prompt
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso11/549b03ab-54a6-48ce-9b14-3f40ae246b2c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000211
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 7000
micro_batch_size: 4
mlflow_experiment_name: /tmp/d148afb262ef385c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 110
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 104e868d-5498-4c04-8a6e-af6f3717fe60
wandb_project: 11a
wandb_run: your_name
wandb_runid: 104e868d-5498-4c04-8a6e-af6f3717fe60
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 549b03ab-54a6-48ce-9b14-3f40ae246b2c
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000211
- train_batch_size: 4
- eval_batch_size: 4
- seed: 110
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 7000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.3279 |
| 1.9823 | 0.2722 | 500 | 1.9859 |
| 1.995 | 0.5444 | 1000 | 1.9599 |
| 1.9828 | 0.8166 | 1500 | 1.9339 |
| 1.8594 | 1.0892 | 2000 | 1.9220 |
| 1.842 | 1.3614 | 2500 | 1.9010 |
| 1.7988 | 1.6336 | 3000 | 1.8846 |
| 1.7691 | 1.9058 | 3500 | 1.8621 |
| 1.6727 | 2.1784 | 4000 | 1.8448 |
| 1.7137 | 2.4506 | 4500 | 1.8377 |
| 1.7098 | 2.7228 | 5000 | 1.8284 |
| 1.6372 | 2.9950 | 5500 | 1.8186 |
| 1.6218 | 3.2676 | 6000 | 1.8223 |
| 1.6231 | 3.5398 | 6500 | 1.8248 |
| 1.6592 | 3.8120 | 7000 | 1.8269 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/DPSK-Distill-32B-Token-GGUF | mradermacher | 2025-03-08T05:32:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Viol2000/DPSK-Distill-32B-Token",
"base_model:quantized:Viol2000/DPSK-Distill-32B-Token",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:06:52Z | ---
base_model: Viol2000/DPSK-Distill-32B-Token
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Viol2000/DPSK-Distill-32B-Token
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DPSK-Distill-32B-Token-GGUF/resolve/main/DPSK-Distill-32B-Token.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
genki10/ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold0 | genki10 | 2025-03-08T05:28:29Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-08T04:43:29Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_nosemanticV2_FineTuningBERT_AugV12_k7_task1_organization_k7_k7_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7375
- Qwk: 0.5133
- Mse: 0.7375
- Rmse: 0.8588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 6 | 6.7808 | 0.0 | 6.7808 | 2.6040 |
| No log | 2.0 | 12 | 4.2243 | 0.0039 | 4.2243 | 2.0553 |
| No log | 3.0 | 18 | 2.0865 | 0.0944 | 2.0865 | 1.4445 |
| No log | 4.0 | 24 | 1.2136 | 0.0316 | 1.2136 | 1.1016 |
| No log | 5.0 | 30 | 1.2239 | 0.0548 | 1.2239 | 1.1063 |
| No log | 6.0 | 36 | 1.0788 | 0.1002 | 1.0788 | 1.0386 |
| No log | 7.0 | 42 | 1.0530 | 0.2204 | 1.0530 | 1.0262 |
| No log | 8.0 | 48 | 0.7780 | 0.3963 | 0.7780 | 0.8821 |
| No log | 9.0 | 54 | 0.7862 | 0.4230 | 0.7862 | 0.8867 |
| No log | 10.0 | 60 | 0.8138 | 0.4185 | 0.8138 | 0.9021 |
| No log | 11.0 | 66 | 0.6654 | 0.4144 | 0.6654 | 0.8157 |
| No log | 12.0 | 72 | 0.7386 | 0.5279 | 0.7386 | 0.8594 |
| No log | 13.0 | 78 | 0.8035 | 0.5002 | 0.8035 | 0.8964 |
| No log | 14.0 | 84 | 0.7581 | 0.5051 | 0.7581 | 0.8707 |
| No log | 15.0 | 90 | 0.7233 | 0.5089 | 0.7233 | 0.8505 |
| No log | 16.0 | 96 | 0.8051 | 0.4401 | 0.8051 | 0.8972 |
| No log | 17.0 | 102 | 0.7567 | 0.4941 | 0.7567 | 0.8699 |
| No log | 18.0 | 108 | 0.8707 | 0.4863 | 0.8707 | 0.9331 |
| No log | 19.0 | 114 | 1.1974 | 0.3453 | 1.1974 | 1.0943 |
| No log | 20.0 | 120 | 0.7859 | 0.4799 | 0.7859 | 0.8865 |
| No log | 21.0 | 126 | 1.0138 | 0.3648 | 1.0138 | 1.0069 |
| No log | 22.0 | 132 | 0.7047 | 0.5068 | 0.7047 | 0.8395 |
| No log | 23.0 | 138 | 0.8307 | 0.4661 | 0.8307 | 0.9114 |
| No log | 24.0 | 144 | 0.7614 | 0.5073 | 0.7614 | 0.8726 |
| No log | 25.0 | 150 | 0.8226 | 0.4655 | 0.8226 | 0.9070 |
| No log | 26.0 | 156 | 0.8165 | 0.4613 | 0.8165 | 0.9036 |
| No log | 27.0 | 162 | 0.9831 | 0.4721 | 0.9831 | 0.9915 |
| No log | 28.0 | 168 | 1.0394 | 0.4185 | 1.0394 | 1.0195 |
| No log | 29.0 | 174 | 0.7373 | 0.5216 | 0.7373 | 0.8587 |
| No log | 30.0 | 180 | 0.9787 | 0.4235 | 0.9787 | 0.9893 |
| No log | 31.0 | 186 | 0.8639 | 0.4846 | 0.8639 | 0.9295 |
| No log | 32.0 | 192 | 0.7515 | 0.5432 | 0.7515 | 0.8669 |
| No log | 33.0 | 198 | 0.9501 | 0.4566 | 0.9501 | 0.9747 |
| No log | 34.0 | 204 | 0.9949 | 0.4107 | 0.9949 | 0.9974 |
| No log | 35.0 | 210 | 1.0571 | 0.4636 | 1.0571 | 1.0281 |
| No log | 36.0 | 216 | 0.9711 | 0.4803 | 0.9711 | 0.9854 |
| No log | 37.0 | 222 | 0.9057 | 0.4351 | 0.9057 | 0.9517 |
| No log | 38.0 | 228 | 1.0173 | 0.4779 | 1.0173 | 1.0086 |
| No log | 39.0 | 234 | 0.8642 | 0.5086 | 0.8642 | 0.9296 |
| No log | 40.0 | 240 | 0.8859 | 0.4795 | 0.8859 | 0.9412 |
| No log | 41.0 | 246 | 0.8663 | 0.4843 | 0.8663 | 0.9307 |
| No log | 42.0 | 252 | 0.8377 | 0.5049 | 0.8377 | 0.9152 |
| No log | 43.0 | 258 | 0.8322 | 0.5052 | 0.8322 | 0.9122 |
| No log | 44.0 | 264 | 0.6407 | 0.5504 | 0.6407 | 0.8004 |
| No log | 45.0 | 270 | 0.9204 | 0.4813 | 0.9204 | 0.9594 |
| No log | 46.0 | 276 | 0.9213 | 0.4563 | 0.9213 | 0.9598 |
| No log | 47.0 | 282 | 1.0325 | 0.4318 | 1.0325 | 1.0161 |
| No log | 48.0 | 288 | 0.6882 | 0.5536 | 0.6882 | 0.8296 |
| No log | 49.0 | 294 | 0.6289 | 0.5331 | 0.6289 | 0.7931 |
| No log | 50.0 | 300 | 0.8150 | 0.5143 | 0.8150 | 0.9028 |
| No log | 51.0 | 306 | 0.8232 | 0.5139 | 0.8232 | 0.9073 |
| No log | 52.0 | 312 | 0.8732 | 0.5006 | 0.8732 | 0.9344 |
| No log | 53.0 | 318 | 0.7829 | 0.4945 | 0.7829 | 0.8848 |
| No log | 54.0 | 324 | 0.7116 | 0.4936 | 0.7116 | 0.8436 |
| No log | 55.0 | 330 | 0.7365 | 0.4995 | 0.7365 | 0.8582 |
| No log | 56.0 | 336 | 0.8263 | 0.4929 | 0.8263 | 0.9090 |
| No log | 57.0 | 342 | 0.7782 | 0.5277 | 0.7782 | 0.8822 |
| No log | 58.0 | 348 | 0.8694 | 0.5090 | 0.8694 | 0.9324 |
| No log | 59.0 | 354 | 0.8633 | 0.5002 | 0.8633 | 0.9292 |
| No log | 60.0 | 360 | 0.8805 | 0.4966 | 0.8805 | 0.9383 |
| No log | 61.0 | 366 | 0.7954 | 0.5146 | 0.7954 | 0.8919 |
| No log | 62.0 | 372 | 0.6620 | 0.5549 | 0.6620 | 0.8137 |
| No log | 63.0 | 378 | 0.9616 | 0.4712 | 0.9616 | 0.9806 |
| No log | 64.0 | 384 | 0.8070 | 0.5131 | 0.8070 | 0.8983 |
| No log | 65.0 | 390 | 0.7670 | 0.5010 | 0.7670 | 0.8758 |
| No log | 66.0 | 396 | 0.7910 | 0.5145 | 0.7910 | 0.8894 |
| No log | 67.0 | 402 | 0.7502 | 0.5119 | 0.7502 | 0.8662 |
| No log | 68.0 | 408 | 0.9059 | 0.4825 | 0.9059 | 0.9518 |
| No log | 69.0 | 414 | 0.7574 | 0.4987 | 0.7574 | 0.8703 |
| No log | 70.0 | 420 | 0.6676 | 0.5253 | 0.6676 | 0.8171 |
| No log | 71.0 | 426 | 0.9502 | 0.4772 | 0.9502 | 0.9748 |
| No log | 72.0 | 432 | 0.6744 | 0.5363 | 0.6744 | 0.8212 |
| No log | 73.0 | 438 | 0.7808 | 0.5107 | 0.7808 | 0.8836 |
| No log | 74.0 | 444 | 0.7827 | 0.5203 | 0.7827 | 0.8847 |
| No log | 75.0 | 450 | 0.7132 | 0.5079 | 0.7132 | 0.8445 |
| No log | 76.0 | 456 | 0.7905 | 0.5028 | 0.7905 | 0.8891 |
| No log | 77.0 | 462 | 0.8389 | 0.5044 | 0.8389 | 0.9159 |
| No log | 78.0 | 468 | 0.6327 | 0.5523 | 0.6327 | 0.7954 |
| No log | 79.0 | 474 | 0.7644 | 0.5113 | 0.7644 | 0.8743 |
| No log | 80.0 | 480 | 0.7303 | 0.5021 | 0.7303 | 0.8546 |
| No log | 81.0 | 486 | 0.7201 | 0.5023 | 0.7201 | 0.8486 |
| No log | 82.0 | 492 | 0.7375 | 0.5133 | 0.7375 | 0.8588 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
OgiServiceDesigner/rlhf-Llama-3.2-1B | OgiServiceDesigner | 2025-03-08T05:28:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-03-08T05:26:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
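Pending an official snippet, here is a minimal sketch; only the repo id is taken from this card, and the prompt is illustrative:

```python
# a minimal sketch, assuming the weights in this repo load as standard transformers weights
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OgiServiceDesigner/rlhf-Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```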
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidmaestrecic/agency_cic_model-david-1 | davidmaestrecic | 2025-03-08T05:26:40Z | 3 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"region:us"
] | null | 2025-03-05T03:30:00Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
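Pending an official snippet, a hedged sketch assuming this repo holds a PEFT adapter for the base model named in the metadata:

```python
# a hedged sketch; the adapter repo and base model are taken from this card's metadata
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "davidmaestrecic/agency_cic_model-david-1", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/meta-llama-3.1-8b-instruct-bnb-4bit")
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```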
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
LeroyDyer/_Spydaz_Web_AI_AGI_R1_OmG_MathMaster | LeroyDyer | 2025-03-08T05:24:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:LeroyDyer/_Spydaz_Web_AI_AGI_R1_Math_Master",
"base_model:finetune:LeroyDyer/_Spydaz_Web_AI_AGI_R1_Math_Master",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T05:17:43Z | ---
base_model: LeroyDyer/_Spydaz_Web_AI_AGI_R1_Math_Master
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/_Spydaz_Web_AI_AGI_R1_Math_Master
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nossie0360/q-FrozenLake-v1-4x4-noSlippery | nossie0360 | 2025-03-08T05:24:16Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-08T05:24:13Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook (Unit 2)
model = load_from_hub(repo_id="nossie0360/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
John6666/cinero-illustrious-v4fp8-sdxl | John6666 | 2025-03-08T05:22:33Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"realism",
"portrait",
"photography",
"creative",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-03-08T05:17:28Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- realism
- portrait
- photography
- creative
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1332040/cinero-illustrious-v4?modelVersionId=1503990).
This model was created by [homoludens](https://civitai.com/user/homoludens).
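A minimal diffusers sketch for this checkpoint (prompt and settings are illustrative, not taken from the card):

```python
# a minimal sketch, assuming the diffusers-format SDXL weights in this repo
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/cinero-illustrious-v4fp8-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("portrait photo of a woman, cinematic lighting", num_inference_steps=28).images[0]
image.save("out.png")
```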
|
jonathansculley/Reinforce-Pixelcopter-PLE-v0 | jonathansculley | 2025-03-08T05:21:11Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-08T04:43:27Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 81.70 +/- 46.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fats-fme/969a6c78-290a-48f6-a49e-95855e666555 | fats-fme | 2025-03-08T05:21:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Genstruct-7B",
"base_model:adapter:NousResearch/Genstruct-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T04:08:07Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Genstruct-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 969a6c78-290a-48f6-a49e-95855e666555
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Genstruct-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 75b2f36e9fccddee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/75b2f36e9fccddee_train_data.json
type:
field_instruction: tools
field_output: func_desc
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/969a6c78-290a-48f6-a49e-95855e666555
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 256
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/75b2f36e9fccddee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ce748d17-c160-411d-84ac-e3efbcca61b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ce748d17-c160-411d-84ac-e3efbcca61b8
warmup_steps: 100
weight_decay: 0.05
xformers_attention: null
```
</details><br>
# 969a6c78-290a-48f6-a49e-95855e666555
This model is a fine-tuned version of [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.9089 |
| 0.1634 | 0.0211 | 100 | 0.0352 |
| 0.063 | 0.0422 | 200 | 0.0056 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jonjew/PascalBlancheStyle | Jonjew | 2025-03-08T05:19:21Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-03-08T05:19:14Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
By Passcal Blanché. A highly stylized CGI illustration in a futuristic
sci-fi aesthetic. The central figure is a muscular and athletic female
warrior, wearing a red jumpsuit that accentuates her curves. Her skin is a
shiny metallic blue and her long, flowing hair is also blue, matching the
metallic hue. She wears a skull-shaped helmet that covers much of her face.
The skull-like appearance that caps her head with a prominent upper jaw,
gives her a menacing appearance.
output:
url: images/_00000_7_.png
- text: >-
By Passcal Blanché. A digital artwork depicting a fantastical, ethereal
scene. At the center is a nude female figure with a pale, almost translucent
skin tone. She has long, flowing black hair that obscures her face, giving
her a mysterious and otherworldly appearance. Her body is slender yet
muscular, with pronounced abs and defined limbs. She is seated on a large,
ornate, red stone pedestal that features intricate, abstract carvings of
mythical beasts.
output:
url: images/_00000_27_.png
- text: >-
By Passcal Blanché. A digitally created artwork in a surreal and cyberpunk
style. The central figure is a woman with a pale, smooth complexion and
long, flowing hair, wearing a futuristic, form-fitting outfit that
accentuates her curvy figure. She sits cross-legged against a plain beige
background and her Japanese-inspired outfit includes a high-necked metallic
bodice that reveals her ample bosom and tight, high-waisted pants. Her face
is made up in a geisha style with red stripes over her eyes and mouth,
adding a striking contrast to her otherwise neutral makeup.
output:
url: images/_00000_21_.png
- text: >-
By Passcal Blanché. A digitally created artwork in a surrealist style. It
depicts an androgynous figure with blue skin, long flowing black hair, and a
muscular physique. The figure is crouching and wearing a large chain on one
of its arms. The figure wears wings resembling those of a dragonfly or
butterfly, which are transparent with a teal tint. It also wears accessories
that resemble bones as well as a small skull at the waist giving a tribal
appeal. The figure's face is serene, its head is tilted slightly to the side
and its eyes are focused on something in the distance.
output:
url: images/_00000_32_.png
- text: >-
By Passcal Blanché. A digital illustration in a fantasy art style. The
central figure is a fearsome, mythical creature, resembling a hybrid of a
snake and a human, known as Medusa. Medusa has the head of a woman with
flowing dark green hair and a fierce expression. Her body is serpentine,
with scales that shimmer in shades of green and brown. She holds a trident
in her right hand, and a severed head in her left hand, which she holds
aloft.
output:
url: images/_00000_46_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: By Passcal Blanché
license: unknown
---
# Pascal Blanché
<Gallery />
## Model description
From https://civitai.com/models/1285926/pascal-blanche?modelVersionId=1274884

Trigger: `By Passcal Blanché`

Strength: 0.8 - 1.2

About this version

Inspired by Pascal Blanché's artwork (one of my favorites).

Recommended resources: Fluxmania III or Flux1.dev fp8.

Settings: dpmpp_2m / sgm_uniform / 25 - 30 steps / cfg 3.5

Weighting: 0.8 - 1.2
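The settings above are ComfyUI/A1111-style and don't map one-to-one to diffusers; as an approximate sketch:

```python
# an approximate sketch; weight_name may be needed if the repo holds several .safetensors files
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Jonjew/PascalBlancheStyle")
pipe.enable_model_cpu_offload()  # keeps VRAM use manageable on consumer GPUs
image = pipe(
    "By Passcal Blanché. A digital illustration of a mythic warrior",  # illustrative prompt
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("pascal_blanche.png")
```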
## Trigger words
You should use `By Passcal Blanché` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PascalBlancheStyle/tree/main) them in the Files & versions tab.
|
nt-ai/whisper-small-bn | nt-ai | 2025-03-08T05:13:35Z | 45 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-02-17T07:55:49Z | ---
library_name: transformers
language:
- bn
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Bengali - Nripen Tudu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Bengali - Nripen Tudu
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1583
- eval_wer: 47.3416
- eval_runtime: 7484.6397
- eval_samples_per_second: 1.116
- eval_steps_per_second: 0.14
- epoch: 0.7626
- step: 800
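For transcription, a minimal sketch (the audio file name is a placeholder for your own Bengali audio):

```python
# a minimal sketch; "sample.wav" is a placeholder
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nt-ai/whisper-small-bn")
print(asr("sample.wav")["text"])
```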
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
eastcourt/distilbert-base-uncased-finetuned-cola | eastcourt | 2025-03-08T05:13:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-08T05:12:49Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8251
- Matthews Correlation: 0.5570
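For inference, a minimal sketch (the label names follow the checkpoint's config, likely the default LABEL_0/LABEL_1 unless remapped):

```python
# a minimal sketch; label ids come from the checkpoint's config
from transformers import pipeline

clf = pipeline("text-classification", model="eastcourt/distilbert-base-uncased-finetuned-cola")
print(clf("The book was written by John."))
```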
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5209 | 1.0 | 535 | 0.4635 | 0.4830 |
| 0.3506 | 2.0 | 1070 | 0.4708 | 0.5339 |
| 0.2351 | 3.0 | 1605 | 0.6342 | 0.5331 |
| 0.1735 | 4.0 | 2140 | 0.7744 | 0.5456 |
| 0.126 | 5.0 | 2675 | 0.8251 | 0.5570 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
prithivMLmods/Sombrero-Opus-14B-Sm3 | prithivMLmods | 2025-03-08T05:12:34Z | 0 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"StreamlinedMemory",
"Math",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T03:20:29Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- StreamlinedMemory
- Math
---

# **Sombrero-Opus-14B-Sm3**
Sombrero-Opus-14B-Sm3 is built on the Qwen 2.5 14B architecture and is designed to enhance coding efficiency and computational reasoning. The model is optimized for streamlined memory usage, avoids unwanted textual token generation, and excels at coding, explanatory reasoning, mathematical problem-solving, and technical tasks. It has been fine-tuned on specialized datasets to improve code generation, structured programming logic, and problem-solving capabilities.
## **Key Improvements**
1. **Optimized for Coding**: The model specializes in generating high-quality, structured code with minimal redundant tokens, ensuring efficient execution.
2. **Enhanced Memory Utilization**: Implements streamlined memory optimization to reduce computational overhead and improve performance.
3. **Superior Reasoning Capabilities**: Excels in solving complex mathematical and algorithmic problems with logical and structured explanations.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed coding responses.
5. **Reduced Unwanted Textual Tokens**: Ensures a more focused output for coding tasks by minimizing excessive textual responses.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Sombrero-Opus-14B-Sm3"

# Load the weights in their native dtype and shard them across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to find the Fibonacci sequence."
messages = [
    {"role": "system", "content": "You are an advanced coding assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat into the model's prompt format, appending the assistant turn marker
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## **Intended Use**
1. **Code Generation & Optimization**:
Designed for developers, assisting in writing, refactoring, and optimizing code across multiple programming languages.
2. **Algorithm & Mathematical Problem Solving**:
Provides precise explanations and solutions for computational and mathematical problems.
3. **Technical Explanations & Documentation**:
Generates clear and structured explanations for coding concepts, libraries, and APIs.
4. **Debugging Assistance**:
Helps analyze code snippets, detect errors, and suggest corrections.
5. **Educational Use**:
Assists students and learners by breaking down complex programming topics into easily understandable sections.
6. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as JSON, XML, and tables, making it ideal for data science applications.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and non-technical topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form code outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured. |
mradermacher/OpenR1-Qwen-7B-SFT2-GGUF | mradermacher | 2025-03-08T05:12:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ZMC2019/OpenR1-Qwen-7B-SFT2",
"base_model:quantized:ZMC2019/OpenR1-Qwen-7B-SFT2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:39:12Z | ---
base_model: ZMC2019/OpenR1-Qwen-7B-SFT2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ZMC2019/OpenR1-Qwen-7B-SFT2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
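As one illustrative option (a sketch, not an official recommendation), a quant from this repo can be loaded with llama-cpp-python; the filename matches the Q4_K_M row in the table below:
```python
from llama_cpp import Llama

# Fetches the named GGUF from the Hub and loads it locally
llm = Llama.from_pretrained(
    repo_id="mradermacher/OpenR1-Qwen-7B-SFT2-GGUF",
    filename="OpenR1-Qwen-7B-SFT2.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization trades off."}]
)
print(out["choices"][0]["message"]["content"])
```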
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-Qwen-7B-SFT2-GGUF/resolve/main/OpenR1-Qwen-7B-SFT2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
texanrangee/7c211d71-28c5-45b5-94b2-5d1adcebf91e | texanrangee | 2025-03-08T05:10:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T04:37:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
omarkhx/omar-first | omarkhx | 2025-03-08T05:09:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T05:09:14Z | ---
license: apache-2.0
---
|
mradermacher/Llama3.2_3B_Reasoning_V2-GGUF | mradermacher | 2025-03-08T05:07:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:Aditya0619/Llama3.2_3B_Reasoning_V2",
"base_model:quantized:Aditya0619/Llama3.2_3B_Reasoning_V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:31:12Z | ---
base_model: Aditya0619/Llama3.2_3B_Reasoning_V2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Aditya0619/Llama3.2_3B_Reasoning_V2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2_3B_Reasoning_V2-GGUF/resolve/main/Llama3.2_3B_Reasoning_V2.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF | mradermacher | 2025-03-08T05:03:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"grpo",
"en",
"base_model:m1n9x/Qwen2.5_3B-GRPO-medical-reasoning",
"base_model:quantized:m1n9x/Qwen2.5_3B-GRPO-medical-reasoning",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:21:31Z | ---
base_model: m1n9x/Qwen2.5_3B-GRPO-medical-reasoning
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/m1n9x/Qwen2.5_3B-GRPO-medical-reasoning
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5_3B-GRPO-medical-reasoning-GGUF/resolve/main/Qwen2.5_3B-GRPO-medical-reasoning.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rmikeyjohnson314/CohortAI | rmikeyjohnson314 | 2025-03-08T04:56:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-08T04:56:03Z | ---
license: apache-2.0
---
|
CompassioninMachineLearning/20K_mixed_15k_animals_march7_strict_llama_chat_prompts | CompassioninMachineLearning | 2025-03-08T04:53:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T04:49:24Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JoyeeChen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WPRM/policy-bid-text-epoch5-1e-5 | WPRM | 2025-03-08T04:51:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct-AWQ",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct-AWQ",
"region:us"
] | null | 2025-03-08T04:51:31Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct-AWQ
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
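A minimal loading sketch, assuming this repo hosts a PEFT (LoRA-style) adapter for the AWQ base model listed above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-3B-Instruct-AWQ"          # base model from the metadata above
adapter_id = "WPRM/policy-bid-text-epoch5-1e-5"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # AWQ base needs autoawq installed
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights
```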
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
anwitac795/LunarLander-v2 | anwitac795 | 2025-03-08T04:43:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-08T04:43:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.27 +/- 25.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files and versions tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the PPO policy
checkpoint = load_from_hub(repo_id="anwitac795/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
zhangthree/fortunetelling | zhangthree | 2025-03-08T04:43:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T04:13:35Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
- zh
---
# Uploaded model
- **Developed by:** zhangthree
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Jonjew/CraigHannaStyle | Jonjew | 2025-03-08T04:42:27Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-03-08T04:42:19Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
By Craig Hanna. A portrait of a woman with an expressive expression, painted
in a realistic style with a focus on texture and color. the woman is
positioned in the middle of the image, with her upper body facing the
viewer. she appears to be in her mid-twenties, with dark brown hair styled
in a messy bun on the left side of her face. her eyes are a bright orange,
and she has a serene and contemplative expression. her lips are slightly
parted, as if she is about to say something. she is wearing a white shirt,
and her hands are clasped together in front of her neck. the background is a
soft gradient of light purple and white, with splashes of green and brown,
creating a sense of movement and energy. the style is reminiscent of
contemporary art, with bold lines and vibrant colors that bring the subject
to life.
parameters:
negative_prompt: 'Steps: 25 Seed: 798538151615720'
output:
url: images/SC_00063_.png
- text: >-
By Craig Hanna. A realistic digital painting of a woman standing in a
contemplative pose, wearing a long, white dress with intricate patterns and
a matching shawl draped over her left shoulder. the woman, who appears to be
in her late 20s or early 30s, has dark skin, dark hair tied in a bun, and a
serious expression. she is standing in the middle of the image, facing away
from the viewer, with her full body visible. the background is a dark,
textured wall, and the floor is made of light-colored wood. the lighting is
soft and diffused, casting gentle shadows on the woman's face and body. the
style is realistic with a touch of realism, featuring detailed textures and
a muted color palette.
parameters:
negative_prompt: 'Steps: 25 Seed: 12833010338283'
output:
url: images/SC_00021_.png
- text: >-
By Craig Hanna. A painting of a nude woman with a muscular physique, sitting
at a table with his back turned towards the viewer. the woman has a black
hair tied in a ponytail, and is wearing a green and red checkered cloth
draped over his lap. his body is slim and muscular, with a focus on his back
and shoulders. he is facing away from the viewer, with his hands resting on
the table. the background is a simple, neutral-colored wall with a yellow
pillow on the right side. the painting is done in a realistic style with a
mix of warm tones and textures, creating a sense of intimacy and closeness.
parameters:
negative_prompt: 'Steps: 25 Seed: 416661955050907'
output:
url: images/SC_00061_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: By Craig Hanna
license: unknown
---
# Craig Hanna
<Gallery />
## Model description
From https://civitai.com/models/1287715/craig-hanna?modelVersionId=1452941
Trigger: `By Craig Hanna`
Strength: 1
About this version
Model inspired by Craig Hanna's artwork.
Trained on Civitai with a dataset of 46 images.
Recommended resources: Fluxmania III
Recommended settings: dpmpp_2m / sgm_uniform / 25 steps / flux guidance: 3.5
Weighting: 1.0
## Trigger words
You should use `By Craig Hanna` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/CraigHannaStyle/tree/main) them in the Files & versions tab.
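A hedged diffusers sketch (steps and guidance follow the recommended settings above; the sampler recommendation targets ComfyUI-style workflows, so diffusers' default scheduler is kept here):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/CraigHannaStyle")  # weighting 1.0 per the notes above

image = pipe(
    "By Craig Hanna. A portrait of a woman with an expressive expression.",
    num_inference_steps=25,   # recommended steps
    guidance_scale=3.5,       # recommended flux guidance
).images[0]
image.save("craig_hanna_style.png")
```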
|
genki10/ASAP_nosemanticV2_FineTuningBERT_AugV12_k5_task1_organization_k5_k5_fold4 | genki10 | 2025-03-08T04:41:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-08T04:00:14Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_nosemanticV2_FineTuningBERT_AugV12_k5_task1_organization_k5_k5_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_nosemanticV2_FineTuningBERT_AugV12_k5_task1_organization_k5_k5_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7084
- Qwk: 0.4859
- Mse: 0.7084
- Rmse: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 6.4415 | 0.0001 | 6.4415 | 2.5380 |
| No log | 2.0 | 10 | 3.5960 | 0.0079 | 3.5960 | 1.8963 |
| No log | 3.0 | 15 | 2.0749 | 0.1307 | 2.0749 | 1.4404 |
| No log | 4.0 | 20 | 1.2818 | 0.0509 | 1.2818 | 1.1322 |
| No log | 5.0 | 25 | 1.5192 | 0.1236 | 1.5192 | 1.2326 |
| No log | 6.0 | 30 | 1.1616 | 0.1964 | 1.1616 | 1.0778 |
| No log | 7.0 | 35 | 1.1014 | 0.2685 | 1.1014 | 1.0495 |
| No log | 8.0 | 40 | 0.7682 | 0.4501 | 0.7682 | 0.8765 |
| No log | 9.0 | 45 | 1.5972 | 0.2671 | 1.5972 | 1.2638 |
| No log | 10.0 | 50 | 0.8350 | 0.4151 | 0.8350 | 0.9138 |
| No log | 11.0 | 55 | 0.6592 | 0.4850 | 0.6592 | 0.8119 |
| No log | 12.0 | 60 | 1.2997 | 0.3583 | 1.2997 | 1.1400 |
| No log | 13.0 | 65 | 0.8738 | 0.4252 | 0.8738 | 0.9348 |
| No log | 14.0 | 70 | 0.9234 | 0.4575 | 0.9234 | 0.9609 |
| No log | 15.0 | 75 | 0.8433 | 0.4523 | 0.8433 | 0.9183 |
| No log | 16.0 | 80 | 1.0331 | 0.4008 | 1.0331 | 1.0164 |
| No log | 17.0 | 85 | 0.8675 | 0.4715 | 0.8675 | 0.9314 |
| No log | 18.0 | 90 | 1.3914 | 0.3609 | 1.3914 | 1.1796 |
| No log | 19.0 | 95 | 1.1679 | 0.3914 | 1.1679 | 1.0807 |
| No log | 20.0 | 100 | 0.9148 | 0.4106 | 0.9148 | 0.9564 |
| No log | 21.0 | 105 | 0.7493 | 0.4598 | 0.7493 | 0.8656 |
| No log | 22.0 | 110 | 0.8396 | 0.4606 | 0.8396 | 0.9163 |
| No log | 23.0 | 115 | 0.9157 | 0.4431 | 0.9157 | 0.9569 |
| No log | 24.0 | 120 | 0.8621 | 0.4384 | 0.8621 | 0.9285 |
| No log | 25.0 | 125 | 0.9933 | 0.4326 | 0.9933 | 0.9967 |
| No log | 26.0 | 130 | 0.7497 | 0.5092 | 0.7497 | 0.8658 |
| No log | 27.0 | 135 | 1.0817 | 0.4061 | 1.0817 | 1.0401 |
| No log | 28.0 | 140 | 0.7755 | 0.4526 | 0.7755 | 0.8806 |
| No log | 29.0 | 145 | 0.8250 | 0.4694 | 0.8250 | 0.9083 |
| No log | 30.0 | 150 | 0.9010 | 0.4517 | 0.9010 | 0.9492 |
| No log | 31.0 | 155 | 0.9045 | 0.4566 | 0.9045 | 0.9511 |
| No log | 32.0 | 160 | 0.8956 | 0.4382 | 0.8956 | 0.9463 |
| No log | 33.0 | 165 | 0.7899 | 0.4306 | 0.7899 | 0.8888 |
| No log | 34.0 | 170 | 0.7645 | 0.4310 | 0.7645 | 0.8743 |
| No log | 35.0 | 175 | 1.0332 | 0.4153 | 1.0332 | 1.0165 |
| No log | 36.0 | 180 | 0.7561 | 0.4338 | 0.7561 | 0.8696 |
| No log | 37.0 | 185 | 0.7050 | 0.4536 | 0.7050 | 0.8397 |
| No log | 38.0 | 190 | 1.1110 | 0.4055 | 1.1110 | 1.0540 |
| No log | 39.0 | 195 | 0.7175 | 0.4637 | 0.7175 | 0.8471 |
| No log | 40.0 | 200 | 0.8152 | 0.4596 | 0.8152 | 0.9029 |
| No log | 41.0 | 205 | 0.7787 | 0.4781 | 0.7787 | 0.8824 |
| No log | 42.0 | 210 | 0.6487 | 0.5117 | 0.6487 | 0.8054 |
| No log | 43.0 | 215 | 0.8734 | 0.4426 | 0.8734 | 0.9346 |
| No log | 44.0 | 220 | 0.6645 | 0.5053 | 0.6645 | 0.8152 |
| No log | 45.0 | 225 | 0.7868 | 0.4952 | 0.7868 | 0.8870 |
| No log | 46.0 | 230 | 0.8741 | 0.4593 | 0.8741 | 0.9349 |
| No log | 47.0 | 235 | 0.6701 | 0.5088 | 0.6701 | 0.8186 |
| No log | 48.0 | 240 | 1.0190 | 0.4121 | 1.0190 | 1.0095 |
| No log | 49.0 | 245 | 0.6543 | 0.5173 | 0.6543 | 0.8089 |
| No log | 50.0 | 250 | 0.7958 | 0.4730 | 0.7958 | 0.8921 |
| No log | 51.0 | 255 | 0.7281 | 0.4919 | 0.7281 | 0.8533 |
| No log | 52.0 | 260 | 0.6535 | 0.5265 | 0.6535 | 0.8084 |
| No log | 53.0 | 265 | 0.7914 | 0.4533 | 0.7914 | 0.8896 |
| No log | 54.0 | 270 | 0.6916 | 0.4868 | 0.6916 | 0.8316 |
| No log | 55.0 | 275 | 0.6915 | 0.5032 | 0.6915 | 0.8315 |
| No log | 56.0 | 280 | 0.8551 | 0.4545 | 0.8551 | 0.9247 |
| No log | 57.0 | 285 | 0.7546 | 0.4753 | 0.7546 | 0.8687 |
| No log | 58.0 | 290 | 0.6653 | 0.5059 | 0.6653 | 0.8157 |
| No log | 59.0 | 295 | 0.8104 | 0.4539 | 0.8104 | 0.9002 |
| No log | 60.0 | 300 | 0.7595 | 0.4712 | 0.7595 | 0.8715 |
| No log | 61.0 | 305 | 0.6900 | 0.4928 | 0.6900 | 0.8307 |
| No log | 62.0 | 310 | 0.7538 | 0.4829 | 0.7538 | 0.8682 |
| No log | 63.0 | 315 | 0.6874 | 0.4860 | 0.6874 | 0.8291 |
| No log | 64.0 | 320 | 0.6741 | 0.5139 | 0.6741 | 0.8210 |
| No log | 65.0 | 325 | 0.6863 | 0.5143 | 0.6863 | 0.8284 |
| No log | 66.0 | 330 | 0.6944 | 0.5087 | 0.6944 | 0.8333 |
| No log | 67.0 | 335 | 0.7359 | 0.4666 | 0.7359 | 0.8579 |
| No log | 68.0 | 340 | 0.6938 | 0.5014 | 0.6938 | 0.8330 |
| No log | 69.0 | 345 | 0.6738 | 0.5180 | 0.6738 | 0.8209 |
| No log | 70.0 | 350 | 0.6574 | 0.5327 | 0.6574 | 0.8108 |
| No log | 71.0 | 355 | 0.6721 | 0.5191 | 0.6721 | 0.8198 |
| No log | 72.0 | 360 | 0.6284 | 0.5288 | 0.6284 | 0.7927 |
| No log | 73.0 | 365 | 0.7548 | 0.4998 | 0.7548 | 0.8688 |
| No log | 74.0 | 370 | 0.6402 | 0.5253 | 0.6402 | 0.8001 |
| No log | 75.0 | 375 | 0.7444 | 0.4907 | 0.7444 | 0.8628 |
| No log | 76.0 | 380 | 0.6742 | 0.5121 | 0.6742 | 0.8211 |
| No log | 77.0 | 385 | 0.6737 | 0.5222 | 0.6737 | 0.8208 |
| No log | 78.0 | 390 | 0.7162 | 0.5055 | 0.7162 | 0.8463 |
| No log | 79.0 | 395 | 0.7296 | 0.4993 | 0.7296 | 0.8542 |
| No log | 80.0 | 400 | 0.6687 | 0.5203 | 0.6687 | 0.8177 |
| No log | 81.0 | 405 | 0.6535 | 0.5148 | 0.6535 | 0.8084 |
| No log | 82.0 | 410 | 0.7062 | 0.4835 | 0.7062 | 0.8404 |
| No log | 83.0 | 415 | 0.6591 | 0.5272 | 0.6591 | 0.8119 |
| No log | 84.0 | 420 | 0.6454 | 0.5051 | 0.6454 | 0.8033 |
| No log | 85.0 | 425 | 0.6927 | 0.4960 | 0.6927 | 0.8323 |
| No log | 86.0 | 430 | 0.6659 | 0.5157 | 0.6659 | 0.8160 |
| No log | 87.0 | 435 | 0.6846 | 0.4952 | 0.6846 | 0.8274 |
| No log | 88.0 | 440 | 0.7172 | 0.4989 | 0.7172 | 0.8469 |
| No log | 89.0 | 445 | 0.6771 | 0.5217 | 0.6771 | 0.8228 |
| No log | 90.0 | 450 | 0.7084 | 0.4859 | 0.7084 | 0.8417 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
benemreseker/yenimodeldeneme | benemreseker | 2025-03-08T04:41:18Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-08T04:04:11Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
FuseAI/FuseChat-Llama-3.1-8B-Instruct | FuseAI | 2025-03-08T04:41:10Z | 197 | 10 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:FuseAI/FuseChat-3.0-DPO-Data",
"arxiv:2412.03187",
"arxiv:2503.04222",
"arxiv:2408.07990",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T11:02:51Z | ---
datasets:
- FuseAI/FuseChat-3.0-DPO-Data
model-index:
- name: FuseChat-Llama-3.1-8B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 72.05
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 7.02
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.38
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.15
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.37
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct
name: Open LLM Leaderboard
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
<p align="center" width="100%">
</p>
<div id="top" align="center">
FuseChat-3.0: Preference Optimization for Implicit Model Fusion
-----------------------------
<h4> |<a href="https://arxiv.org/abs/2412.03187"> 📑 WRPO Paper </a> |
<a href="https://arxiv.org/pdf/2503.04222"> 📑 FuseChat-3.0 Paper </a> |
<a href="https://github.com/SLIT-AI/FuseChat-3.0"> 🐱 GitHub Repo </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> |
<a href="https://slit-ai.github.io/FuseChat-3.0/"> 🌐 Website </a> |
</h4>
</div>
<div align="center">
<img src="FuseChat-3.0.png" width=70%/>
</div>
We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on instruction-following test sets AlpacaEval-2 and Arena-Hard respectively. We have released the [FuseChat-3.0](https://huggingface.co/FuseAI) models and datasets on Huggingface.
## Overview
Combining the strengths of multiple large language models (LLMs) represents a promising approach to enhance individual model capabilities. Model fusion is a technique that integrates the strengths of robust source LLMs into a target LLM.
Previous iterations of the [FuseChat](https://arxiv.org/abs/2408.07990) series employed probabilistic distribution matrices generated by source models to transfer knowledge to target models. We refer to this method as **explicit model fusion (EMF)** because it involves a well-defined knowledge transfer process. While applicable to models with varying architectures and sizes, and without increasing memory overhead during inference, this approach presents notable challenges such as vocabulary alignment and the merging of distribution matrices from different LLMs. These issues complicate model fusion, reduce its efficiency, and may introduce noise and errors that affect the fusion results.
FuseChat-3.0, however, takes a different approach by enhancing a single LLM through implicit learning from robust open-source LLMs, a process we term **implicit model fusion (IMF)**. The concept of IMF has been widely utilized to improve the performance of weaker models. For instance, a weak model can be boosted through fine-tuning with outputs from stronger LLMs. Moreover, a reward model can be trained using outputs from various LLMs, enabling it to learn and capture the differences in capabilities between the LLMs. Zephyr further collects responses from multiple LLMs and ranks them with GPT-4 to obtain preference data for training the policy. Inspired by recent alignment techniques, we propose an IMF method to transfer the capabilities of source LLMs to a target LLM through preference optimization.
Our IMF method follows a three-stage process aimed at effectively transferring capabilities from source LLMs to a target LLM. First, during **dataset construction**, we sample N responses from each of the source LLMs and annotate these responses using an external reward model. Second, in the **supervised fine-tuning (SFT)** stage, we fine-tune the target model using the best responses, which not only enhances the target model's capabilities but also helps mitigate the distributional gap between the source and target models. Finally, in the **direct preference optimization (DPO)** stage, we optimize the target model by using the best and worst responses from the source models as preference pairs, further enhancing the target model's performance. The complete pipeline will be detailed in the following paragraph.
## Dataset
### Prompt Selection
Our datasets were designed to enhance model's instruction following, general conversation, mathematics, coding, and Chinese-language capabilities. We selected data from open-source community datasets, applying targeted filtering and preprocessing. Key datasets and filtering criteria included:
- **Instruction Following & General Conversation**: Sourced from [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [Magpie-Pro-DPO-100K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-DPO-100K-v0.1), and [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2), excluding code and math data.
- **Mathematics**: Selected from [OpenMathInstruct-2](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2), with nearly 52,000 unique samples.
- **Coding**: Curated from [leetcode](https://huggingface.co/datasets/greengerong/leetcode) and [self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), retaining prompts with test cases.
- **Chinese Language**: Integrated [alpaca_gpt4_zh](https://huggingface.co/datasets/llamafactory/alpaca_gpt4_zh) and [Magpie-Qwen2-Pro-200K-Chinese](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese), filtering out code and math prompts to retain approximately 10,000 high-quality samples.
### Response Sampling
For each dataset's prompts, we synthesized responses mainly from four different series of source models, specifically [Gemma-2-27b-It](https://huggingface.co/google/gemma-2-27b-it), [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), [Qwen-2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct), and [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct).
- **Instruction Following & General Conversation**: We sampled each prompt five times from all the source models.
- **Mathematics**: We retained the responses generated by Llama-3.1-405B-Instruct from the original dataset (OpenMathInstruct-2) and additionally sampled responses using [Qwen-2.5-Math-72B-Instruct](https://huggingface.co/Qwen/Qwen-2.5-Math-72B-Instruct).
- **Coding**: We sampled each prompt eight times for all source models.
- **Chinese Language**: We included a single response sampled exclusively from Qwen-2.5-72B-Instruct.
The sampling parameters for different models are detailed in Table below.
<table class="js-sort-table table hidden">
<tr>
<td class="js-sort-string"><strong>Source LLMs</strong></td>
<td class="js-sort-string"><strong>Sampling Params</strong></td>
</tr>
<tr>
<td>Gemma-2-27b-It</td>
<td>Temp 0.8 Top-p 0.95</td>
</tr>
<tr>
<td>Mistral-Large-Instruct-2407</td>
<td>Temp 0.8 Top-p 0.95</td>
</tr>
<tr>
<td>Qwen-2.5-(Math)-72B-Instruct</td>
<td>Temp 0.7 Top-p 0.8 Repetition penalty 1.05</td>
</tr>
<tr>
<td>Llama-3.1-70B-Instruct</td>
<td>Temp 0.8 Top-p 0.95</td>
</tr>
</table>
### Data Construction
Unlike the original approach in [WRPO](https://arxiv.org/abs/2412.03187), which constructs preference pairs from target model responses and treats source model responses as additional positive samples, our research in mathematics and coding domains revealed that sampling from multiple source models yields more and higher-quality preference pair data. Based on this insight, FuseChat-3.0 leverages the best and worst response pairs generated by source models as preference pairs to optimize the target model. This refined approach not only preserves the core advantages of implicit model fusion but also results in a more streamlined and practical implementation, making it particularly well-suited for real-world applications within the open-source community.
- **Instruction Following**: To assign RM scores to the five responses generated by each source model, we employed [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1) for annotation. We then divided the annotated data into SFT and DPO datasets using a 4:6 ratio. For the SFT phase, we selected the responses with the highest RM scores. During the DPO phase, we paired responses from the same source model, designating those with the highest RM scores as positive samples and those with the lowest RM scores as negative samples. We ensured that the RM score difference between the positive and negative samples in each pair ranged from 0.01 to 0.1 (a minimal pairing sketch follows this list).
- **Mathematics**: We initially annotated the responses from all source models for correctness by comparing them with the gold labels and evaluating them using the RM scores provided by ArmoRM. We then strategically divided the dataset into SFT phase and DPO phase. In the SFT phase, we incorporated responses that were correct and had the highest RM scores. This selection ensured that the fine-tuning process was based on high-quality responses that aligned closely with the desired outcomes. For the DPO phase, we constructed paired samples from the same source model. The positive samples consisted of correct answers with the highest RM scores, while the negative samples were incorrect answers with the lowest RM scores. To ensure meaningful comparisons during optimization, we maintained an RM score differential between positive and negative pairs within the range of 0.01 to 0.1.
- **Coding**: We employed a dual-scoring system comprising correctness scores and RM scores for coding evaluation. The correctness scores assessed whether the code passed both static analysis and test cases, ensuring functional accuracy. The RM scores were used for preference evaluation, gauging the quality of responses based on predefined criteria. During the SFT phase, we included responses that not only passed all test cases but also achieved the highest RM scores. This selection ensured that the model was fine-tuned on exemplary code that met both correctness and preference standards. In the DPO phase, we contrasted positive samples—high-scoring responses that passed the tests—with negative samples—low-scoring responses that failed the tests. This comparison aimed to optimize the model's ability to prefer higher-quality code during training. We excluded any instances where all model responses failed to meet the testing criteria. This exclusion was necessary to maintain the integrity of the evaluation process, as such cases did not provide meaningful data for assessing and improving the model's performance.
- **Chinese**: We exclusively utilized responses sampled from Qwen-2.5-72B-Instruct during the SFT phase, due to its strong performance in the Chinese language.
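A minimal sketch of the pairing rule described in the first item, using hypothetical field names:
```python
def build_dpo_pair(responses, min_gap=0.01, max_gap=0.1):
    """responses: list of {"text": str, "rm_score": float} dicts sampled from
    ONE source model. Returns a preference pair if the ArmoRM score gap is
    within the accepted range, else None."""
    best = max(responses, key=lambda r: r["rm_score"])
    worst = min(responses, key=lambda r: r["rm_score"])
    gap = best["rm_score"] - worst["rm_score"]
    if min_gap <= gap <= max_gap:
        return {"chosen": best["text"], "rejected": worst["text"]}
    return None  # prompt yields no usable pair for this source model
```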
Our final dataset comprised 158,667 total entries, with 94,539 entries for the SFT phase and 64,128 preference pairs for the DPO phase. The overall composition of the datasets is shown below.
<table class="js-sort-table table hidden">
<tr>
<td class="js-sort-string"><strong>Dataset</strong></td>
<td class="js-sort-number"><strong>Total Count</strong></td>
<td class="js-sort-number"><strong>SFT Count</strong></td>
<td class="js-sort-number"><strong>DPO Count</strong></td>
<td class="js-sort-string"><strong>Category</strong></td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/openbmb/UltraFeedback" target="_blank">UltraFeedback</a></td>
<td>51098</td>
<td>20439</td>
<td>30659</td>
<td>Instruction following</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-DPO-100K-v0.1" target="_blank">Magpie-Pro-DPO</a></td>
<td>20374</td>
<td>8149</td>
<td>12225</td>
<td>Instruction following</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/nvidia/HelpSteer2" target="_blank">HelpSteer2</a></td>
<td>9435</td>
<td>3774</td>
<td>5661</td>
<td>Instruction following</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/nvidia/OpenMathInstruct-2" target="_blank">OpenMathInstruct-2</a></td>
<td>51803</td>
<td>40188</td>
<td>11615</td>
<td>Mathematics</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/greengerong/leetcode" target="_blank">leetcode</a></td>
<td>3113</td>
<td>1877</td>
<td>1236</td>
<td>Coding</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k" target="_blank">self-oss-instruct-sc2</a></td>
<td>12892</td>
<td>10160</td>
<td>2732</td>
<td>Coding</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/llamafactory/alpaca_gpt4_zh" target="_blank">alpaca_gpt4_zh</a></td>
<td>2471</td>
<td>2471</td>
<td>0</td>
<td>Chinese Language</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese" target="_blank">Magpie-Qwen2-Pro</a></td>
<td>7481</td>
<td>7481</td>
<td>0</td>
<td>Chinese Language</td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td>158667</td>
<td>94539</td>
<td>64128</td>
<td>All</td>
</tr>
</table>
## Training
The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs.
### SFT
We used [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) as our fine-tuning library. For all target models, we fine-tuned for 3 epochs, with a batch size of 128 and a maximum sequence length of 2048 tokens. A cosine learning rate schedule with a warmup ratio of 0.1 is employed. Different models' learning rates are shown in the table below.
<table class="js-sort-table table hidden">
<tr>
<td class="js-sort-string"><strong>Target Models</strong></td>
<td class="js-sort-string"><strong>Learning rate</strong></td>
</tr>
<tr>
<td>Llama-3.1-8B-Instruct</td>
<td>5e-6</td>
</tr>
<tr>
<td>Qwen-2.5-7B-Instruct</td>
<td>2e-6</td>
</tr>
<tr>
<td>Gemma-2-9B-It</td>
<td>2e-6</td>
</tr>
<tr>
<td>Llama-3.2-(1/3)B-Instruct</td>
<td>5e-6</td>
</tr>
</table>
### DPO
We used [alignment-handbook](https://github.com/huggingface/alignment-handbook) as our DPO training library. For all target SFT models, we trained for 1 epoch with a maximum sequence length of 2048 and a cosine learning rate schedule with a warmup ratio of 0.1. We saved checkpoints every 100 steps and selected the best from the last two checkpoints. For the Llama-3.1 and Llama-3.2 series models, we introduced length normalization in DPO training, as shown in the formula below.
$$\mathcal{L}_{\text{LN-DPO}} = -\log \sigma\!\left(\frac{\beta}{|y_w|}\log\frac{\pi_\theta(y_w|x)}{\pi_{\text{ref}}(y_w|x)} - \frac{\beta}{|y_l|}\log\frac{\pi_\theta(y_l|x)}{\pi_{\text{ref}}(y_l|x)}\right)$$
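A compact PyTorch sketch of this length-normalized objective is given below; the function and tensor names are ours, not the alignment-handbook API, and the sequence-level log-probabilities are assumed to be summed over response tokens.
```python
# Length-normalized DPO loss, following the formula above.
import torch
import torch.nn.functional as F

def ln_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps,
                chosen_lens, rejected_lens, beta=10.0):
    # beta / |y| rescales each sequence-level log-ratio by the response length
    chosen = beta / chosen_lens * (policy_chosen_logps - ref_chosen_logps)
    rejected = beta / rejected_lens * (policy_rejected_logps - ref_rejected_logps)
    # -log sigma of the reward margin, averaged over the batch
    return -F.logsigmoid(chosen - rejected).mean()
```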
The hyperparameters for the different models are shown in the table below.
<table class="js-sort-table table hidden">
<tr>
<td class="js-sort-string"><strong>Target SFT Models</strong></td>
<td class="js-sort-string"><strong>Learning rate</strong></td>
<td class="js-sort-string"><strong>β</strong></td>
<td class="js-sort-string"><strong>Length normalize</strong></td>
</tr>
<tr>
<td>FuseChat-Llama-3.1-8B-SFT</td>
<td>8e-7</td>
<td>10</td>
<td>Yes</td>
</tr>
<tr>
<td>FuseChat-Qwen-2.5-7B-SFT</td>
<td>3e-7</td>
<td>0.01</td>
<td>No</td>
</tr>
<tr>
<td>FuseChat-Gemma-2-9B-SFT</td>
<td>5e-7</td>
<td>0.01</td>
<td>No</td>
</tr>
<tr>
<td>FuseChat-Llama-3.2-(1/3)B-SFT</td>
<td>1e-6</td>
<td>10</td>
<td>Yes</td>
</tr>
</table>
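As a rough point of reference, the Qwen row of this table could be expressed as a TRL `DPOConfig` as sketched below (alignment-handbook builds on TRL). Note that length normalization is not a stock TRL option, so the Llama rows would additionally need a custom loss like the one sketched above; the output path is hypothetical.
```python
# Approximate DPO hyperparameters for FuseChat-Qwen-2.5-7B-SFT as a TRL config.
from trl import DPOConfig

dpo_args = DPOConfig(
    output_dir="fusechat-qwen-2.5-7b-dpo",  # hypothetical path
    num_train_epochs=1,
    learning_rate=3e-7,
    beta=0.01,                 # row value from the table
    max_length=2048,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    save_steps=100,            # checkpoints every 100 steps, as described above
)
```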
## Evaluation
The evaluation of instruction-tuned models mainly focuses on performance in instruction following, natural language understanding, general question answering, reasoning, mathematics, coding, etc. For the evaluation of FuseChat-3.0, we include 14 benchmarks organized into four categories:
- **Instruction Following** Tasks: AlpacaEval-2, Arena-Hard, MT-Bench, AlignBench v1.1 (Chinese).
- **General** Tasks: LiveBench-0831, MMLU-Pro, MMLU-redux, GPQA-Diamond.
- **Mathematics** Tasks: GSM8K, MATH, AMC 23.
- **Coding** Tasks: HumanEval, MBPP, LiveCodeBench 2408-2411.
We provide more details and release our evaluation code at [FuseEval](https://github.com/SLIT-AI/FuseChat-3.0/FuseEval).
The evaluation results of the five series of fused models are shown below, demonstrating that our FuseChat-3.0 models achieved varying degrees of improvement across the different target models. When selecting Llama-3.1-8B-Instruct as the target model, our fused model **FuseChat-Llama-3.1-8B-Instruct achieved an average performance improvement of 6.8 points across 14 benchmarks. Notably, it showed significant improvements of 37.1 and 30.1 points on the instruction-following benchmarks AlpacaEval-2 and Arena-Hard, respectively**. Additionally, FuseChat-Llama-3.1-8B-Instruct outperformed AllenAI's recently released Llama-3.1-Tulu-3-8B model on all benchmarks except GSM8K and GPQA-Diamond. These results demonstrate the effectiveness of FuseChat-3.0.
### FuseChat-Llama-3.1-8B-Instruct Performance
<table class="js-sort-table table hidden">
<tr>
<td class="js-sort-string"><strong>Benchmarks</strong></td>
<td class="js-sort-string"><strong>Llama-3.1-8B-Instruct</strong></td>
<td class="js-sort-string"><strong>Llama-3.1-Tulu-3-8B</strong></td>
<td class="js-sort-string"><strong>FuseChat-Llama-3.1-8B-SFT</strong></td>
<td class="js-sort-string"><strong>FuseChat-Llama-3.1-8B-Instruct</strong></td>
</tr>
<tr>
<td style="white-space: nowrap;">AlpacaEval-2 (LC %)</td>
<td>28.3</td>
<td>33.4</td>
<td>41.3</td>
<td><strong>65.4</strong></td>
</tr>
<tr>
<td>Arena-Hard (WR %)</td>
<td>28.1</td>
<td>45.6</td>
<td>38.7</td>
<td><strong>58.2</strong></td>
</tr>
<tr>
<td>MT-Bench</td>
<td>8.4</td>
<td>8.3</td>
<td>8.5</td>
<td><strong>9.0</strong></td>
</tr>
<tr>
<td>AlignBench v1.1</td>
<td>4.6</td>
<td>6.2</td>
<td>6.3</td>
<td><strong>6.7</strong></td>
</tr>
<tr>
<td>GSM8K</td>
<td>85.9</td>
<td><strong>88.6</strong></td>
<td>87.0</td>
<td>88.0</td>
</tr>
<tr>
<td>MATH</td>
<td>50.7</td>
<td>47.5</td>
<td>54.7</td>
<td><strong>55.2</strong></td>
</tr>
<tr>
<td>AMC 23</td>
<td>25.0</td>
<td>25.0</td>
<td>30.0</td>
<td><strong>37.5</strong></td>
</tr>
<tr>
<td>LiveBench 0831</td>
<td>27.6</td>
<td>30.1</td>
<td>30.2</td>
<td><strong>32.0</strong></td>
</tr>
<tr>
<td>MMLU-Pro</td>
<td><strong>50.0</strong></td>
<td>42.9</td>
<td>47.8</td>
<td>49.2</td>
</tr>
<tr>
<td>MMLU-redux</td>
<td>67.2</td>
<td>66.3</td>
<td>68.4</td>
<td><strong>69.2</strong></td>
</tr>
<tr>
<td>GPQA-Diamond</td>
<td>33.8</td>
<td>35.9</td>
<td><strong>37.9</strong></td>
<td>34.9</td>
</tr>
<tr>
<td>HumanEval</td>
<td>69.5</td>
<td>66.5</td>
<td>69.5</td>
<td><strong>71.3</strong></td>
</tr>
<tr>
<td>MBPP</td>
<td><strong>75.4</strong></td>
<td>56.3</td>
<td>71.4</td>
<td>72.0</td>
</tr>
<tr>
<td>LiveCodeBench<br>2408-2411</td>
<td>12.3</td>
<td>10.6</td>
<td>12.6</td>
<td><strong>13.1</strong></td>
</tr>
<tr>
<td>Average</td>
<td>40.5</td>
<td>40.2</td>
<td>43.2</td>
<td><strong>47.3</strong></td>
</tr>
</table>
## Citation
```
@inproceedings{yang2025weightedreward,
title={Weighted-Reward Preference Optimization for Implicit Model Fusion},
author={Ziyi Yang and Fanqi Wan and Longguang Zhong and Tianyuan Shi and Xiaojun Quan},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=fq24pEb8SL}
}
@article{yang2025fusechat,
title={FuseChat-3.0: Preference Optimization Meets Heterogeneous Model Fusion},
author={Ziyi Yang and Fanqi Wan and Longguang Zhong and Canbin Huang and Guosheng Liang and Xiaojun Quan},
journal={arXiv preprint arXiv:2503.04222},
year={2025},
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/FuseAI__FuseChat-Llama-3.1-8B-Instruct-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=FuseAI%2FFuseChat-Llama-3.1-8B-Instruct&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 25.64|
|IFEval (0-Shot) | 72.05|
|BBH (3-Shot) | 30.85|
|MATH Lvl 5 (4-Shot)| 7.02|
|GPQA (0-shot) | 7.38|
|MuSR (0-shot) | 6.15|
|MMLU-PRO (5-shot) | 30.37| |
Alphatao/8ea1178f-7714-4a61-9d8c-478a84876cab | Alphatao | 2025-03-08T04:38:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-07T19:49:33Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8ea1178f-7714-4a61-9d8c-478a84876cab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f57d828564838a69_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f57d828564838a69_train_data.json
type:
field_input: chosen
field_instruction: source
field_output: reject
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
  '': 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/8ea1178f-7714-4a61-9d8c-478a84876cab
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 12288
micro_batch_size: 4
mlflow_experiment_name: /tmp/f57d828564838a69_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.00648773493710141
wandb_entity: null
wandb_mode: online
wandb_name: 64067547-49fe-44ed-9f15-08d6f4ccfab7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 64067547-49fe-44ed-9f15-08d6f4ccfab7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8ea1178f-7714-4a61-9d8c-478a84876cab
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the `f57d828564838a69_train_data.json` dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.6288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 12288
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.426 | 0.0000 | 1 | 1.3817 |
| 1.0913 | 0.0042 | 100 | 0.8642 |
| 0.7442 | 0.0084 | 200 | 0.8486 |
| 0.7381 | 0.0125 | 300 | 0.8412 |
| 0.9318 | 0.0167 | 400 | 0.8342 |
| 0.9462 | 0.0209 | 500 | 0.8278 |
| 0.8658 | 0.0251 | 600 | 0.8247 |
| 1.2001 | 0.0293 | 700 | 0.8234 |
| 0.7767 | 0.0334 | 800 | 0.8127 |
| 1.1022 | 0.0376 | 900 | 0.8088 |
| 0.7616 | 0.0418 | 1000 | 0.8136 |
| 0.691 | 0.0460 | 1100 | 0.8051 |
| 0.814 | 0.0502 | 1200 | 0.8078 |
| 0.8866 | 0.0543 | 1300 | 0.8027 |
| 0.8338 | 0.0585 | 1400 | 0.8037 |
| 0.8899 | 0.0627 | 1500 | 0.8008 |
| 0.729 | 0.0669 | 1600 | 0.7922 |
| 0.621 | 0.0710 | 1700 | 0.7919 |
| 0.7033 | 0.0752 | 1800 | 0.7894 |
| 0.8998 | 0.0794 | 1900 | 0.7898 |
| 0.8322 | 0.0836 | 2000 | 0.7842 |
| 0.9954 | 0.0878 | 2100 | 0.7866 |
| 0.9642 | 0.0919 | 2200 | 0.7816 |
| 0.7502 | 0.0961 | 2300 | 0.7792 |
| 0.9017 | 0.1003 | 2400 | 0.7779 |
| 0.8775 | 0.1045 | 2500 | 0.7730 |
| 0.8333 | 0.1087 | 2600 | 0.7720 |
| 1.1947 | 0.1128 | 2700 | 0.7735 |
| 0.7781 | 0.1170 | 2800 | 0.7689 |
| 0.975 | 0.1212 | 2900 | 0.7695 |
| 0.8731 | 0.1254 | 3000 | 0.7673 |
| 0.7944 | 0.1296 | 3100 | 0.7640 |
| 0.6546 | 0.1337 | 3200 | 0.7609 |
| 0.5772 | 0.1379 | 3300 | 0.7556 |
| 0.9376 | 0.1421 | 3400 | 0.7527 |
| 0.6594 | 0.1463 | 3500 | 0.7574 |
| 0.7937 | 0.1505 | 3600 | 0.7506 |
| 0.6651 | 0.1546 | 3700 | 0.7490 |
| 0.787 | 0.1588 | 3800 | 0.7461 |
| 1.0014 | 0.1630 | 3900 | 0.7435 |
| 0.7214 | 0.1672 | 4000 | 0.7428 |
| 0.7854 | 0.1713 | 4100 | 0.7411 |
| 0.7552 | 0.1755 | 4200 | 0.7411 |
| 0.715 | 0.1797 | 4300 | 0.7366 |
| 0.6976 | 0.1839 | 4400 | 0.7356 |
| 0.9447 | 0.1881 | 4500 | 0.7350 |
| 0.8067 | 0.1922 | 4600 | 0.7292 |
| 1.0411 | 0.1964 | 4700 | 0.7274 |
| 0.643 | 0.2006 | 4800 | 0.7252 |
| 0.7939 | 0.2048 | 4900 | 0.7247 |
| 0.6452 | 0.2090 | 5000 | 0.7205 |
| 0.7369 | 0.2131 | 5100 | 0.7212 |
| 0.6581 | 0.2173 | 5200 | 0.7159 |
| 0.775 | 0.2215 | 5300 | 0.7138 |
| 0.6879 | 0.2257 | 5400 | 0.7118 |
| 0.8093 | 0.2299 | 5500 | 0.7093 |
| 0.7375 | 0.2340 | 5600 | 0.7127 |
| 0.6826 | 0.2382 | 5700 | 0.7046 |
| 0.9633 | 0.2424 | 5800 | 0.7016 |
| 0.8521 | 0.2466 | 5900 | 0.7043 |
| 0.7054 | 0.2508 | 6000 | 0.6990 |
| 0.6763 | 0.2549 | 6100 | 0.6957 |
| 0.836 | 0.2591 | 6200 | 0.6942 |
| 0.6314 | 0.2633 | 6300 | 0.6923 |
| 0.7427 | 0.2675 | 6400 | 0.6884 |
| 0.5987 | 0.2717 | 6500 | 0.6875 |
| 0.6365 | 0.2758 | 6600 | 0.6855 |
| 0.6329 | 0.2800 | 6700 | 0.6849 |
| 0.6765 | 0.2842 | 6800 | 0.6812 |
| 0.6983 | 0.2884 | 6900 | 0.6800 |
| 0.7398 | 0.2925 | 7000 | 0.6775 |
| 0.4994 | 0.2967 | 7100 | 0.6757 |
| 0.6947 | 0.3009 | 7200 | 0.6750 |
| 0.6398 | 0.3051 | 7300 | 0.6719 |
| 0.7557 | 0.3093 | 7400 | 0.6715 |
| 0.7419 | 0.3134 | 7500 | 0.6675 |
| 0.8206 | 0.3176 | 7600 | 0.6647 |
| 0.532 | 0.3218 | 7700 | 0.6639 |
| 0.6014 | 0.3260 | 7800 | 0.6642 |
| 0.7216 | 0.3302 | 7900 | 0.6612 |
| 0.6612 | 0.3343 | 8000 | 0.6572 |
| 0.7312 | 0.3385 | 8100 | 0.6561 |
| 0.5502 | 0.3427 | 8200 | 0.6556 |
| 0.7803 | 0.3469 | 8300 | 0.6531 |
| 0.3768 | 0.3511 | 8400 | 0.6518 |
| 0.7379 | 0.3552 | 8500 | 0.6514 |
| 0.5688 | 0.3594 | 8600 | 0.6522 |
| 0.7844 | 0.3636 | 8700 | 0.6492 |
| 0.7967 | 0.3678 | 8800 | 0.6480 |
| 0.6085 | 0.3720 | 8900 | 0.6469 |
| 0.5959 | 0.3761 | 9000 | 0.6460 |
| 0.7083 | 0.3803 | 9100 | 0.6445 |
| 0.9192 | 0.3845 | 9200 | 0.6426 |
| 0.8767 | 0.3887 | 9300 | 0.6406 |
| 0.6501 | 0.3928 | 9400 | 0.6397 |
| 0.6942 | 0.3970 | 9500 | 0.6384 |
| 0.5516 | 0.4012 | 9600 | 0.6378 |
| 0.563 | 0.4054 | 9700 | 0.6366 |
| 0.7784 | 0.4096 | 9800 | 0.6359 |
| 0.5832 | 0.4137 | 9900 | 0.6357 |
| 0.9015 | 0.4179 | 10000 | 0.6351 |
| 0.9016 | 0.4221 | 10100 | 0.6342 |
| 0.7122 | 0.4263 | 10200 | 0.6330 |
| 0.6701 | 0.4305 | 10300 | 0.6330 |
| 0.7161 | 0.4346 | 10400 | 0.6322 |
| 0.5625 | 0.4388 | 10500 | 0.6313 |
| 0.6608 | 0.4430 | 10600 | 0.6314 |
| 0.6945 | 0.4472 | 10700 | 0.6308 |
| 0.5166 | 0.4514 | 10800 | 0.6304 |
| 0.5635 | 0.4555 | 10900 | 0.6298 |
| 0.7577 | 0.4597 | 11000 | 0.6296 |
| 0.6303 | 0.4639 | 11100 | 0.6292 |
| 0.5529 | 0.4681 | 11200 | 0.6288 |
| 0.5988 | 0.4723 | 11300 | 0.6290 |
| 0.4636 | 0.4764 | 11400 | 0.6288 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Prog25/Stock_analysis | Prog25 | 2025-03-08T04:36:59Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-03-08T04:36:59Z | ---
license: artistic-2.0
---
|
saiteki-kai/Llama-Guard-3-1B-SFT-CLS-02 | saiteki-kai | 2025-03-08T04:36:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-Guard-3-8B",
"base_model:finetune:meta-llama/Llama-Guard-3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-03-08T04:35:54Z | ---
base_model: meta-llama/Llama-Guard-3-8B
library_name: transformers
model_name: saiteki-kai/Llama-Guard-3-8B-SFT-BeaverTails
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for saiteki-kai/Llama-Guard-3-8B-SFT-BeaverTails
This model is a fine-tuned version of [meta-llama/Llama-Guard-3-8B](https://huggingface.co/meta-llama/Llama-Guard-3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="saiteki-kai/Llama-Guard-3-1B-SFT-CLS-02", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/giuseppe-magazzu/llama-guard-finetuning/runs/mb1hirti)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.47.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TareksTesting/Progenitor-Chrome-LLaMa-70B | TareksTesting | 2025-03-08T04:30:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TareksLab/TestMergePart1",
"base_model:merge:TareksLab/TestMergePart1",
"base_model:TareksLab/TestMergePart2",
"base_model:merge:TareksLab/TestMergePart2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T04:13:11Z | ---
base_model:
- TareksLab/TestMergePart2
- TareksLab/TestMergePart1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method using [TareksLab/TestMergePart2](https://huggingface.co/TareksLab/TestMergePart2) as a base.
### Models Merged
The following models were included in the merge:
* [TareksLab/TestMergePart1](https://huggingface.co/TareksLab/TestMergePart1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/TestMergePart2
- model: TareksLab/TestMergePart1
merge_method: nearswap
base_model: TareksLab/TestMergePart2
parameters:
t:
- value: 0.0001
dtype: bfloat16
```
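A hedged sketch of applying this configuration with mergekit's Python entry point is shown below. The call pattern follows the usage documented in the mergekit README, but the exact signatures should be checked against the installed version, and the output directory is hypothetical.
```python
# Run the NearSwap merge defined in config.yml (sketch; verify against the
# current mergekit API before use).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Progenitor-Chrome-LLaMa-70B",  # hypothetical output directory
    options=MergeOptions(copy_tokenizer=True),
)
```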
|
pai123/DeepSeek-R1-distilled_model_2 | pai123 | 2025-03-08T04:20:49Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T04:20:28Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/mistral-7b-wildchat-semantics_var_4 | Yuhan123 | 2025-03-08T04:20:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-08T04:15:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aigdat/Qwen2.5-Coder-quantized-asym4-g128-onnx | aigdat | 2025-03-08T04:17:28Z | 0 | 0 | null | [
"onnx",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"region:us"
] | null | 2025-03-08T03:57:49Z | ---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
--- |