modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-06-22 06:27:16) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (492 classes) | tags (sequence, 1 to 4.05k values) | pipeline_tag (54 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-06-22 06:26:41) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Alphatao/5bb857c2-6667-4cf4-ba3a-9767a0cd0ee6 | Alphatao | 2025-04-28T09:16:48Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T18:36:59Z | ---
base_model: bigscience/bloom-560m
library_name: transformers
model_name: 5bb857c2-6667-4cf4-ba3a-9767a0cd0ee6
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 5bb857c2-6667-4cf4-ba3a-9767a0cd0ee6
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alphatao/5bb857c2-6667-4cf4-ba3a-9767a0cd0ee6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/qm3kpc28)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
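For orientation, a DPO run with TRL typically looks like the minimal sketch below. This is a hedged illustration only: the preference dataset, `beta` value, and output directory are placeholders, not the actual settings used to train this model.
```python
# Minimal, hypothetical DPO sketch with TRL; the dataset and hyperparameters are
# illustrative placeholders, not the configuration used for this checkpoint.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "bigscience/bloom-560m"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with "prompt", "chosen" and "rejected" columns works here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="bloom-560m-dpo", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,  # older TRL releases call this argument `tokenizer`
    train_dataset=train_dataset,
)
trainer.train()
```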
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
doll88253/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_thorny_anaconda | doll88253 | 2025-04-28T09:14:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am woolly thorny anaconda",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T02:57:08Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_thorny_anaconda
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am woolly thorny anaconda
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_thorny_anaconda
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="doll88253/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_thorny_anaconda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
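For orientation, a minimal GRPO loop with TRL looks roughly like the sketch below. This is a hedged illustration only: the reward function and dataset are toy placeholders and do not reflect the actual Gensyn swarm training setup.
```python
# Hypothetical GRPO sketch with TRL; reward function and dataset are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_num_unique_chars(completions, **kwargs):
    # Toy reward: favor completions with more unique characters.
    return [float(len(set(c))) for c in completions]

# Any prompt-only dataset works; this public TLDR dataset is just an example.
train_dataset = load_dataset("trl-lib/tldr", split="train")

training_args = GRPOConfig(output_dir="qwen2.5-0.5b-grpo")
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_num_unique_chars,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```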
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ferry-unrest/voiceofritu | ferry-unrest | 2025-04-28T09:13:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T08:53:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: voiceofritu
---
# Voiceofritu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `voiceofritu` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "voiceofritu",
"lora_weights": "https://huggingface.co/ferry-unrest/voiceofritu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ferry-unrest/voiceofritu', weight_name='lora.safetensors')
image = pipeline('voiceofritu').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
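As a hedged illustration, continuing from the pipeline above with the LoRA already loaded, scaling or fusing the adapter uses the generic diffusers LoRA API (the prompt is just an example):
```python
# Illustrative only: scale the LoRA's influence, fuse it into the base weights
# for faster repeated inference, then unfuse to restore the original weights.
pipeline.fuse_lora(lora_scale=0.8)   # bake the scaled LoRA into the base model
image = pipeline('voiceofritu portrait photo').images[0]
pipeline.unfuse_lora()               # undo the fusion, back to the plain base model
```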
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ferry-unrest/voiceofritu/discussions) to add images that show off what you’ve made with this LoRA.
|
wzx111/Qwen2.5-1.5B-Open-R1-GRPO | wzx111 | 2025-04-28T09:12:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:watermelonhjg/MATH-lighteval-level_2",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-01T02:49:31Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: watermelonhjg/MATH-lighteval-level_2
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-GRPO
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-1.5B-Open-R1-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [watermelonhjg/MATH-lighteval-level_2](https://huggingface.co/datasets/watermelonhjg/MATH-lighteval-level_2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wzx111/Qwen2.5-1.5B-Open-R1-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kingpalace/open-r1-FRPO/runs/b0m3yg9q)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Deeksha0777/bigbird-bart-legal-summarizer | Deeksha0777 | 2025-04-28T09:10:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-28T08:37:14Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: bigbird-bart-legal-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-bart-legal-summarizer
This model is a fine-tuned version of an unspecified base model on an unspecified dataset (neither is recorded in the auto-generated card).
It achieves the following results on the evaluation set:
- Loss: 10.8966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
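Expressed as code, these settings correspond roughly to the following `Seq2SeqTrainingArguments` (a hedged reconstruction: the output directory is a placeholder and anything not listed above is left at its default):
```python
# Hypothetical reconstruction of the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bigbird-bart-legal-summarizer",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,   # effective train batch size: 2 * 4 = 8
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```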
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 11.4010 |
| No log | 2.0 | 2 | 11.4010 |
| No log | 3.0 | 3 | 10.8966 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1
|
fgjg856hh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish | fgjg856hh | 2025-04-28T09:10:34Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tawny enormous starfish",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T21:22:01Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tawny enormous starfish
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fgjg856hh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
OWETBDD/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_waddling_jackal | OWETBDD | 2025-04-28T09:09:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am flightless waddling jackal",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T04:25:13Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_waddling_jackal
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am flightless waddling jackal
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_waddling_jackal
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="OWETBDD/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_waddling_jackal", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
shnl/aia-vietnamese-embedding | shnl | 2025-04-28T09:08:36Z | 50 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1109251",
"loss:Matryoshka2dLoss",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"dataset:shnl/vn-embed-r1-2",
"arxiv:1908.10084",
"arxiv:2402.14776",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base",
"base_model:finetune:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-21T03:05:50Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1109251
- loss:Matryoshka2dLoss
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
widget:
- source_sentence: Prudie nghĩ lý_tưởng nhất là chọn một món quà có suy_nghĩ thực_sự
đằng sau nó mà không khiến bạn tốn quá nhiều tiền .
sentences:
- Nghiên cứu đã nói về một người được thuê để giúp đỡ với sự biến đổi .
- Lý_tưởng nhất là nhận được một món quà rẻ_tiền mà không đắt tiền .
- Không sao đâu , chỉ cần ở cạnh bến tàu là đủ .
- source_sentence: Anh nghĩ chúng_ta làm thế bởi_vì chúng_ta muốn nó ngăn_chặn tội_ác
hay không phải vì chúng_ta không muốn trả tiền cho tù_nhân để ở trong tù .
sentences:
- Có một cái thụt trên xe của tôi .
- Anh nghĩ chúng_ta làm thế bởi_vì chúng_ta muốn nó ngăn_chặn tội_ác hay gì đó sao
?
- Chúng_tôi không muốn tù_nhân ở trong tù , để cho họ tự_do !
- source_sentence: Ghi_chép lại các ý_tưởng sáng_tạo và kế_hoạch cho một dự_án mới
. Xây_dựng một ứng_dụng di_động để hổ_trợ học_tập .
sentences:
- 'Để ghi_chép lại các ý_tưởng sáng_tạo và kế_hoạch cho dự_án xây_dựng ứng_dụng
di_động hổ_trợ học_tập , bạn có_thể thực_hiện các bước sau : 1 . Tìm_hiểu về nhu_cầu
và yêu_cầu của người dùng : - Phân_tích đối_tượng người dùng chính và nhu_cầu
học_tập của họ . - Nghiên_cứu các ứng_dụng di_động hiện có để tìm_hiểu những tính_năng
phổ_biến và thiếu_sót . 2 . Xác_định mục_tiêu chính và tính_năng cần có của ứng_dụng
: - Xác_định mục_tiêu chính của ứng_dụng , ví_dụ : cung_cấp tài_liệu học_tập ,
hỗ_trợ giải_đáp câu hỏi , tổ_chức lịch_học , v.v. - Liệt_kê các tính_năng cần
có để đáp_ứng nhu_cầu học_tập của người dùng . 3 . Tạo thiết_kế giao_diện người
dùng : - Vẽ sơ_đồ giao_diện và xác_định cấu_trúc menu . - Thiết_kế giao_diện người
dùng đơn_giản , trực_quan và dễ sử_dụng . 4 . Phát_triển và kiểm_thử ứng_dụng
: - Sử_dụng môi_trường phát_triển phù_hợp để xây_dựng ứng_dụng di_động . - Tiến_hành
kiểm_thử từng tính_năng để đảm_bảo tính ổn_định và tương_thích trên các thiết_bị
khác nhau . 5 . Triển_khai và quảng_bá ứng_dụng : - Đăng_ký tài_khoản nhà phát_triển
trên nền_tảng cung_cấp ứng_dụng di_động ( ví_dụ : App_Store , Google_Play ) .
- Tiến_hành quảng_bá ứng_dụng thông_qua công_cụ tiếp_thị trực_tuyến , mạng xã_hội
, và các kênh khác . 6 . Đánh_giá và cải_tiến : - Theo_dõi phản_hồi từ người dùng
và tiếp_nhận đề_xuất cải_tiến . - Cập_nhật và nâng_cấp ứng_dụng dựa trên phản_hồi
và yêu_cầu của người dùng . Ghi_chú : - Hãy đảm_bảo tuân_thủ các quy_định và chính_sách
phát_triển ứng_dụng của nền_tảng cung_cấp . - Lên kế_hoạch và quản_lý thời_gian
phù_hợp để đảm_bảo tiến_độ phát_triển ứng_dụng được thực_hiện đúng hẹn .'
- Bạn nghĩ gì về việc tổ chức một cuộc tranh luận cấp thủ tướng ở Ấn Độ giống như
cuộc tranh luận Tổng thống ở Mỹ?
- '" Giải_vô_địch bóng_đá thế_giới 2022 sẽ được tổ_chức tại Qatar từ ngày 21/11/2022
đến ngày 18/12/2022 . Đây là một sự_kiện thể_thao quy_mô lớn thu_hút sự tham_gia
của các đội bóng_đá hàng_đầu trên toàn_cầu . Các trận đấu sẽ diễn ra tại các sân_vận_động
đẳng_cấp và sẽ mang đến những trận cầu hấp_dẫn và kịch_tính cho người hâm_mộ bóng_đá
trên khắp thế_giới . "'
- source_sentence: Nhưng bộ lọc không chỉ dành cho cha mẹ lo lắng .
sentences:
- Các bộ lọc không phải là vấn đề chỉ dành cho cha mẹ .
- Vâng , tôi sẽ đồng_ý .
- Cổ văn giải thích lý do tại sao nước swirls .
- source_sentence: Công_chúa sơ_sinh đã qua_đời khi nào ?
sentences:
- Vắc-xin bệnh dại là một loại vắc_xin sử_dụng để ngăn_ngừa bệnh dại . Có một_số
loại vắc-xin có sẵn an_toàn và hiệu_quả . Vắc-xin có_thể được sử_dụng để ngăn_ngừa
bệnh dại trước và trong một khoảng thời_gian sau khi tiếp_xúc với vi-rút_dại từ
chó hoặc dơi cắn . Khả_năng miễn_dịch phát_triển lâu_dài sau khi được tiêm_chủng
đầy_đủ . Vắc-xin được tiêm qua da hoặc cơ . Sau khi tiêm_chủng tiếp_xúc thường
được sử_dụng cùng với immunoglobulin bệnh dại . Những người có nguy_cơ phơi nhiễm
cao được khuyến_cáo nên chủng ngừa trước . Vắc-xin có hiệu_quả ở người và các
động_vật khác . Chủng_ngừa cho chó rất hiệu_quả trong việc ngăn_ngừa sự lây_lan
bệnh dại sang người .
- Anh ta sẽ bắt_đầu tập_luyện .
- 'Vào ngày 8 tháng 3 , tình_trạng của cô bé tiếp_tục xấu đi và công_chúa sơ_sinh
qua_đời lúc 3 : 38 sáng , khi mới 5 tháng tuổi . Thiên_hoàng đã ra_lệnh dừng hành_động
quân_đội của họ trong ngày ; ông cũng ra_lệnh cho một ngày quốc_tang . Vào ngày
13 tháng 3 , công_chúa được chôn_cất trong một buổi lễ đơn_giản tại nghĩa_trang
Toshimagaoka . Hoàng_hậu bị suy_sụp ; bà đã giữ một con búp_bê có kích_thước tương_đương
với Sachiko trong nhiều ngày và không có thêm một đứa con nữa trong vài năm sau
đó .'
datasets:
- shnl/vn-embed-r1-2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.7476549793083225
name: Pearson Cosine
- type: spearman_cosine
value: 0.7349650844630721
name: Spearman Cosine
---
# SentenceTransformer based on VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) on the [vn-embed-r1-2](https://huggingface.co/datasets/shnl/vn-embed-r1-2) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) <!-- at revision 608779b86741a8acd8c8d38132974ff04086b138 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [vn-embed-r1-2](https://huggingface.co/datasets/shnl/vn-embed-r1-2)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Công_chúa sơ_sinh đã qua_đời khi nào ?',
'Vào ngày 8 tháng 3 , tình_trạng của cô bé tiếp_tục xấu đi và công_chúa sơ_sinh qua_đời lúc 3 : 38 sáng , khi mới 5 tháng tuổi . Thiên_hoàng đã ra_lệnh dừng hành_động quân_đội của họ trong ngày ; ông cũng ra_lệnh cho một ngày quốc_tang . Vào ngày 13 tháng 3 , công_chúa được chôn_cất trong một buổi lễ đơn_giản tại nghĩa_trang Toshimagaoka . Hoàng_hậu bị suy_sụp ; bà đã giữ một con búp_bê có kích_thước tương_đương với Sachiko trong nhiều ngày và không có thêm một đứa con nữa trong vài năm sau đó .',
'Vắc-xin bệnh dại là một loại vắc_xin sử_dụng để ngăn_ngừa bệnh dại . Có một_số loại vắc-xin có sẵn an_toàn và hiệu_quả . Vắc-xin có_thể được sử_dụng để ngăn_ngừa bệnh dại trước và trong một khoảng thời_gian sau khi tiếp_xúc với vi-rút_dại từ chó hoặc dơi cắn . Khả_năng miễn_dịch phát_triển lâu_dài sau khi được tiêm_chủng đầy_đủ . Vắc-xin được tiêm qua da hoặc cơ . Sau khi tiêm_chủng tiếp_xúc thường được sử_dụng cùng với immunoglobulin bệnh dại . Những người có nguy_cơ phơi nhiễm cao được khuyến_cáo nên chủng ngừa trước . Vắc-xin có hiệu_quả ở người và các động_vật khác . Chủng_ngừa cho chó rất hiệu_quả trong việc ngăn_ngừa sự lây_lan bệnh dại sang người .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.7477 |
| **spearman_cosine** | **0.735** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vn-embed-r1-2
* Dataset: [vn-embed-r1-2](https://huggingface.co/datasets/shnl/vn-embed-r1-2) at [4219d20](https://huggingface.co/datasets/shnl/vn-embed-r1-2/tree/4219d20ea092aff57da1b9fef6a9c9db9ae7ef61)
* Size: 1,109,251 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 26.8 tokens</li><li>max: 221 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 26.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 25.16 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:---------------------------------------------------------------------------|:-------------------------------------------------------------|
| <code>Bạn có_thể làm điều đó với tôi . ถาม</code> | <code>Bạn có_thể làm điều đó với tôi không ?_bạn có_thể làm điều đó</code> | <code>Vâng , cái đó hoàn_toàn không phải vậy .</code> |
| <code>Ở rìa của bạn</code> | <code>Rời xa chân_trời của bạn</code> | <code>Người phụ_nữ này không bị bóp_cổ vào năm 1983 .</code> |
| <code>Hơi thở của bathala ! Đã nói là người da đen .</code> | <code>Người đàn ông đen tối đang nói chuyện .</code> | <code>Người đàn ông đen tối vẫn im lặng .</code> |
* Loss: [<code>Matryoshka2dLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshka2dloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3,
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": 1
}
```
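In sentence-transformers, a loss with these parameters could be constructed roughly as follows. This is a hedged sketch: it assumes the base model from this card and simply mirrors the JSON above.
```python
# Hypothetical construction of the loss configuration shown above.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import Matryoshka2dLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
inner_loss = MultipleNegativesRankingLoss(model)
loss = Matryoshka2dLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_layers_per_step=1,
    n_dims_per_step=1,
    kl_temperature=0.3,
)
```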
### Evaluation Dataset
#### vn-embed-r1-2
* Dataset: [vn-embed-r1-2](https://huggingface.co/datasets/shnl/vn-embed-r1-2) at [4219d20](https://huggingface.co/datasets/shnl/vn-embed-r1-2/tree/4219d20ea092aff57da1b9fef6a9c9db9ae7ef61)
* Size: 195,751 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 26.62 tokens</li><li>max: 216 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 23.98 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 23.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>Rodriguez dường_như đã 50 tuổi .</code> | <code>Rodriguez là người trung_niên .</code> | <code>Rodriguez còn rất trẻ .</code> |
| <code>Nằm trên sông seine , cánh cổng xinh đẹp này đã chứng kiến sự khởi đầu của nhiều cuộc phiêu lưu biển ' bao gồm cả samuel de cham ? Đơn giản là khởi hành cho những gì sẽ trở thành quebec ' và vẫn là một thánh đường cho thủy thủ .</code> | <code>Rất nhiều thủy thủ nổi tiếng đã rời khỏi cảng này trên sông seine .</code> | <code>Đây là một cảng nhỏ , và không có nhiều thủy thủ đến thăm nó ngày hôm này .</code> |
| <code>Trung_Mỹ Bạn đã từng sống ở Trung_Mỹ chưa ?</code> | <code>Tôi biết bạn sống ở Trung_Mỹ khi lớn lên , nhưng bạn đến Hoa_Kỳ khi nào ?</code> | <code>Thư_viện giống của Albron hơn .</code> |
* Loss: [<code>Matryoshka2dLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshka2dloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3,
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": 1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine |
|:------:|:-----:|:-------------:|:---------------:|:-----------------------:|
| 0.5769 | 10000 | 0.7734 | 0.5230 | 0.7350 |
### Framework Versions
- Python: 3.10.17
- Sentence Transformers: 4.1.0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### Matryoshka2dLoss
```bibtex
@misc{li20242d,
title={2D Matryoshka Sentence Embeddings},
author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
year={2024},
eprint={2402.14776},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
jinaai/jina-clip-v2 | jinaai | 2025-04-28T09:08:11Z | 47,561 | 219 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"jina_clip",
"feature-extraction",
"xlm-roberta",
"eva02",
"clip",
"sentence-similarity",
"retrieval",
"multimodal",
"multi-modal",
"crossmodal",
"cross-modal",
"mteb",
"clip-benchmark",
"vidore",
"sentence-transformers",
"transformers.js",
"custom_code",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2412.08802",
"arxiv:2405.20204",
"base_model:jinaai/xlm-roberta-flash-implementation",
"base_model:quantized:jinaai/xlm-roberta-flash-implementation",
"license:cc-by-nc-4.0",
"region:eu"
] | feature-extraction | 2024-10-08T14:34:45Z | ---
base_model:
- jinaai/xlm-roberta-flash-implementation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
library_name: transformers
license: cc-by-nc-4.0
tags:
- xlm-roberta
- eva02
- clip
- feature-extraction
- sentence-similarity
- retrieval
- multimodal
- multi-modal
- crossmodal
- cross-modal
- mteb
- clip-benchmark
- vidore
- transformers
- sentence-transformers
- onnx
- safetensors
- transformers.js
inference: false
---
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
<p align="center">
<b>Jina CLIP v2: Multilingual Multimodal Embeddings for Texts and Images</b>
</p>
This model is based on the paper [jina-clip-v2: Multilingual Multimodal Embeddings for Text and Images](https://huggingface.co/papers/2412.08802).
## Quick Start
[Blog](https://jina.ai/news/jina-clip-v2-multilingual-multimodal-embeddings-for-text-and-images) | [Technical Report](https://arxiv.org/abs/2412.08802) | [Azure](https://azuremarketplace.microsoft.com/en-gb/marketplace/apps/jinaai.jina-clip-v2-vm?tab=Overview) | [AWS SageMaker](https://aws.amazon.com/marketplace/pp/prodview-bfbctuqmky676) | [Google Cloud Platform](https://console.cloud.google.com/marketplace/browse?hl=en&inv=1&invt=AbiD-g&q=jina) | [API](https://jina.ai/embeddings)
## Intended Usage & Model Info
`jina-clip-v2` is a **general-purpose multilingual multimodal embedding model for text & images**.
Multimodal embeddings enable searching and understanding data across different modalities through a coherent representation. They serve as the backbone of neural information retrieval and multimodal GenAI applications.
Built upon [`jina-clip-v1`](https://huggingface.co/jinaai/jina-clip-v1) and our recently released [`jina-embeddings-v3`](https://huggingface.co/jinaai/jina-embeddings-v3), `jina-clip-v2` features several significant improvements:
* **Improved Performance**: v2 shows a 3% performance improvement over v1 in both text-image and text-text retrieval tasks. Similar to v1, v2's text encoder can serve as an effective multilingual long-context dense retriever. It performs on par with our frontier model `jina-embeddings-v3` (currently the best multilingual embeddings under 1B parameters on MTEB).
* **Multilingual Support**: Using the same backbone as `jina-embeddings-v3` for the text tower, `jina-clip-v2` supports 89 languages for multilingual-image retrieval, showing up to 4% improvement compared to `nllb-clip-large-siglip` on multilingual image retrieval tasks.
* **Higher Image Resolution**: v2 now supports 512x512 input image resolution, a significant increase from v1's 224x224. This higher resolution enables better processing of detailed images, improved feature extraction, and more accurate recognition of fine-grained visual elements.
* **Matryoshka Representations**: v2 allows users to truncate the output dimensions of both text and image embeddings from 1024 down to 64, reducing storage and processing overhead while maintaining strong performance.
Measuring 0.9B parameters, `jina-clip-v2` combines two powerful encoders:
* the text encoder `Jina-XLM-RoBERTa` (the backbone of `jina-embeddings-v3`) and
* the vision encoder `EVA02-L14` (an efficient vision Transformer developed by BAAI).
| FEATURE | TEXT ENCODER | IMAGE ENCODER |
|-----------------------|-------------------------|------------------|
| Base Model | Jina-XLM-RoBERTa | EVA02-L |
| Parameters | 561M | 304M |
| Input Specification | 8,192 tokens (max) | 512×512 pixels |
| Min Output Dimensions | 64 | 64 |
| Max Output Dimensions | 1,024 | 1,024 |
| Layers | 24 | 24 |
| Attention Mechanism | FlashAttention2 | xFormers |
| Pooling Strategy | Mean pooling | CLS pooling |
| Additional Features | 89 languages supported | Patch size 14x14 |
These encoders are jointly trained to create aligned representations of images and text.
CLIP-like models have established themselves as the backbone for general-purpose multimodal applications. With `jina-clip-v2`, we're taking these capabilities to the next level, breaking down language barriers to deliver more accurate cross-modal understanding and retrieval. We're confident this release delivers on the promise of making multimodal search and retrieval both more powerful and more accessible to developers worldwide.
## Training, Data, Parameters
Please refer to our [technical report of jina-clip-v2](https://arxiv.org/abs/2412.08802) for the model and training details.
For details on the previous version, see the [technical report of jina-clip-v1](https://arxiv.org/abs/2405.20204).
## Faster Inference: FA2, XFormers and bf16
On a CUDA-enabled torch environment, the model comes in `torch.bfloat16`
precision by default. It is highly recommended to install
[FlashAttention](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#installation-and-features)
and [xFormers](https://github.com/facebookresearch/xformers?tab=readme-ov-file#installing-xformers)
to make use of their efficient attention mechanism implementations.
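For reference, the installation commands usually look like the following (assumptions: a CUDA toolchain and a matching PyTorch build are already in place; consult each project's installation guide for version specifics):
```bash
pip install flash-attn --no-build-isolation   # FlashAttention 2
pip install xformers                          # xFormers
```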
## Usage
<details>
<summary>via Jina AI <a href="https://jina.ai/embeddings/">Embedding API</a></summary>
```bash
curl https://api.jina.ai/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer [JINA_AI_API_TOKEN]" \
-d @- <<EOFEOF
{
"model": "jina-clip-v2",
"dimensions": 1024,
"task": "retrieval.query",
"normalized": true,
"embedding_type": "float",
"input": [
{
"text": "غروب جميل على الشاطئ"
},
{
"text": "海滩上美丽的日落"
},
{
"text": "A beautiful sunset over the beach"
},
{
"text": "Un beau coucher de soleil sur la plage"
},
{
"text": "Ein wunderschöner Sonnenuntergang am Strand"
},
{
"text": "Ένα όμορφο ηλιοβασίλεμα πάνω από την παραλία"
},
{
"text": "समुद्र तट पर एक खूबसूरत सूर्यास्त"
},
{
"text": "Un bellissimo tramonto sulla spiaggia"
},
{
"text": "浜辺に沈む美しい夕日"
},
{
"text": "해변 위로 아름다운 일몰"
},
{
"image": "https://i.ibb.co/nQNGqL0/beach1.jpg"
},
{
"image": "https://i.ibb.co/r5w8hG8/beach2.jpg"
}
]
}
EOFEOF
```
</details>
<details>
<summary>via <a href="https://huggingface.co/docs/transformers/en/index">transformers</a></summary>
```python
# !pip install transformers einops timm pillow
from transformers import AutoModel
# Initialize the model
model = AutoModel.from_pretrained('jinaai/jina-clip-v2', trust_remote_code=True)
# Corpus
sentences = [
'غروب جميل على الشاطئ', # Arabic
'海滩上美丽的日落', # Chinese
'Un beau coucher de soleil sur la plage', # French
'Ein wunderschöner Sonnenuntergang am Strand', # German
'Ένα όμορφο ηλιοβασίλεμα πάνω από την παραλία', # Greek
'समुद्र तट पर एक खूबसूरत सूर्यास्त', # Hindi
'Un bellissimo tramonto sulla spiaggia', # Italian
'浜辺に沈む美しい夕日', # Japanese
'해변 위로 아름다운 일몰', # Korean
]
# Public image URLs or PIL Images
image_urls = ['https://i.ibb.co/nQNGqL0/beach1.jpg', 'https://i.ibb.co/r5w8hG8/beach2.jpg']
# Choose a matryoshka dimension, set to None to get the full 1024-dim vectors
truncate_dim = 512
# Encode text and images
text_embeddings = model.encode_text(sentences, truncate_dim=truncate_dim)
image_embeddings = model.encode_image(
image_urls, truncate_dim=truncate_dim
) # also accepts PIL.Image.Image, local filenames, dataURI
# Encode query text
query = 'beautiful sunset over the beach' # English
query_embeddings = model.encode_text(
query, task='retrieval.query', truncate_dim=truncate_dim
)
# Text to Image
print('En -> Img: ' + str(query_embeddings @ image_embeddings[0].T))
# Image to Image
print('Img -> Img: ' + str(image_embeddings[0] @ image_embeddings[1].T))
# Text to Text
print('En -> Ar: ' + str(query_embeddings @ text_embeddings[0].T))
print('En -> Zh: ' + str(query_embeddings @ text_embeddings[1].T))
print('En -> Fr: ' + str(query_embeddings @ text_embeddings[2].T))
print('En -> De: ' + str(query_embeddings @ text_embeddings[3].T))
print('En -> Gr: ' + str(query_embeddings @ text_embeddings[4].T))
print('En -> Hi: ' + str(query_embeddings @ text_embeddings[5].T))
print('En -> It: ' + str(query_embeddings @ text_embeddings[6].T))
print('En -> Jp: ' + str(query_embeddings @ text_embeddings[7].T))
print('En -> Ko: ' + str(query_embeddings @ text_embeddings[8].T))
```
</details>
<details>
<summary>via <a href="https://sbert.net/">sentence-transformers</a></summary>
```python
# !pip install sentence-transformers einops timm pillow
from sentence_transformers import SentenceTransformer
# Choose a matryoshka dimension
truncate_dim = 512
# Initialize the model
model = SentenceTransformer(
'jinaai/jina-clip-v2', trust_remote_code=True, truncate_dim=truncate_dim
)
# Corpus
sentences = [
'غروب جميل على الشاطئ', # Arabic
'海滩上美丽的日落', # Chinese
'Un beau coucher de soleil sur la plage', # French
'Ein wunderschöner Sonnenuntergang am Strand', # German
'Ένα όμορφο ηλιοβασίλεμα πάνω από την παραλία', # Greek
'समुद्र तट पर एक खूबसूरत सूर्यास्त', # Hindi
'Un bellissimo tramonto sulla spiaggia', # Italian
'浜辺に沈む美しい夕日', # Japanese
'해변 위로 아름다운 일몰', # Korean
]
# Public image URLs or PIL Images
image_urls = ['https://i.ibb.co/nQNGqL0/beach1.jpg', 'https://i.ibb.co/r5w8hG8/beach2.jpg']
# Encode text and images
text_embeddings = model.encode(sentences, normalize_embeddings=True)
image_embeddings = model.encode(
image_urls, normalize_embeddings=True
) # also accepts PIL.Image.Image, local filenames, dataURI
# Encode query text
query = 'beautiful sunset over the beach' # English
query_embeddings = model.encode(
query, prompt_name='retrieval.query', normalize_embeddings=True
)
```
</details>
<details>
<summary>via <a href="https://huggingface.co/docs/transformers.js/en/index">transformers.js</a></summary>
> [!NOTE]
> JinaCLIP was added in Transformers.js v3.1.0, so make sure you're using a compatible version!
> See the [release notes](https://github.com/huggingface/transformers.js/releases/tag/3.1.0) for more information.
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Compute text and/or image embeddings with `jinaai/jina-clip-v2`:
```js
import { AutoModel, AutoProcessor, RawImage, matmul } from "@huggingface/transformers";
// Load processor and model
const model_id = "jinaai/jina-clip-v2";
const processor = await AutoProcessor.from_pretrained(model_id);
const model = await AutoModel.from_pretrained(model_id, { dtype: "q4" /* e.g., "fp16", "q8", or "q4" */ });
// Prepare inputs
const urls = ["https://i.ibb.co/nQNGqL0/beach1.jpg", "https://i.ibb.co/r5w8hG8/beach2.jpg"];
const images = await Promise.all(urls.map(url => RawImage.read(url)));
const sentences = [
"غروب جميل على الشاطئ", // Arabic
"海滩上美丽的日落", // Chinese
"Un beau coucher de soleil sur la plage", // French
"Ein wunderschöner Sonnenuntergang am Strand", // German
"Ένα όμορφο ηλιοβασίλεμα πάνω από την παραλία", // Greek
"समुद्र तट पर एक खूबसूरत सूर्यास्त", // Hindi
"Un bellissimo tramonto sulla spiaggia", // Italian
"浜辺に沈む美しい夕日", // Japanese
"해변 위로 아름다운 일몰", // Korean
];
// Encode text and images
const inputs = await processor(sentences, images, { padding: true, truncation: true });
const { l2norm_text_embeddings, l2norm_image_embeddings } = await model(inputs);
// Encode query (text-only)
const query_prefix = "Represent the query for retrieving evidence documents: ";
const query_inputs = await processor(query_prefix + "beautiful sunset over the beach");
const { l2norm_text_embeddings: query_embeddings } = await model(query_inputs);
// Compute text-image similarity scores
const text_to_image_scores = await matmul(query_embeddings, l2norm_image_embeddings.transpose(1, 0));
console.log("text-image similarity scores", text_to_image_scores.tolist()[0]); // [0.29530206322669983, 0.3183615803718567]
// Compute image-image similarity scores
const image_to_image_score = await matmul(l2norm_image_embeddings[0], l2norm_image_embeddings[1]);
console.log("image-image similarity score", image_to_image_score.item()); // 0.9344457387924194
// Compute text-text similarity scores
const text_to_text_scores = await matmul(query_embeddings, l2norm_text_embeddings.transpose(1, 0));
console.log("text-text similarity scores", text_to_text_scores.tolist()[0]); // [0.5566609501838684, 0.7028406858444214, 0.582255482673645, 0.6648036241531372, 0.5462006330490112, 0.6791588068008423, 0.6192430257797241, 0.6258729100227356, 0.6453716158866882]
```
</details>
<details>
<summary>via the <a href="https://onnxruntime.ai/">ONNX Runtime</a></summary>
```python
# !pip install transformers onnxruntime pillow
import onnxruntime as ort
from transformers import AutoImageProcessor, AutoTokenizer
# Load tokenizer and image processor using transformers
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-clip-v2', trust_remote_code=True)
image_processor = AutoImageProcessor.from_pretrained(
'jinaai/jina-clip-v2', trust_remote_code=True
)
# Corpus
sentences = [
'غروب جميل على الشاطئ', # Arabic
'海滩上美丽的日落', # Chinese
'Un beau coucher de soleil sur la plage', # French
'Ein wunderschöner Sonnenuntergang am Strand', # German
'Ένα όμορφο ηλιοβασίλεμα πάνω από την παραλία', # Greek
'समुद्र तट पर एक खूबसूरत सूर्यास्त', # Hindi
'Un bellissimo tramonto sulla spiaggia', # Italian
'浜辺に沈む美しい夕日', # Japanese
'해변 위로 아름다운 일몰', # Korean
]
# Public image URLs or PIL Images
image_urls = ['https://i.ibb.co/nQNGqL0/beach1.jpg', 'https://i.ibb.co/r5w8hG8/beach2.jpg']
# Tokenize input texts and transform input images
input_ids = tokenizer(sentences, return_tensors='np')['input_ids']
pixel_values = image_processor(image_urls)['pixel_values']
# Start an ONNX Runtime Session
session = ort.InferenceSession('jina-clip-v2/onnx/model.onnx')
# Run inference
output = session.run(None, {'input_ids': input_ids, 'pixel_values': pixel_values})
# Keep the normalized embeddings; the first two outputs are the un-normalized ones
_, _, text_embeddings, image_embeddings = output
```
</details>
## License
This model is licensed to download and run under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en). It is available for commercial use via the [Jina Embeddings API](https://jina.ai/embeddings/), [AWS](https://aws.amazon.com/marketplace/pp/prodview-bfbctuqmky676), [Azure](https://azuremarketplace.microsoft.com/en-gb/marketplace/apps/jinaai.jina-clip-v2-vm?tab=Overview), and [GCP](https://console.cloud.google.com/marketplace/browse?hl=en&inv=1&invt=AbiFWQ&q=jina). To download for commercial use, please [contact us](https://jina.ai/contact-sales).
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find `jina-clip-v2` useful in your research, please cite the following paper:
```bibtex
@misc{koukounas2024jinaclipv2multilingualmultimodalembeddings,
title={jina-clip-v2: Multilingual Multimodal Embeddings for Text and Images},
author={Andreas Koukounas and Georgios Mastrapas and Bo Wang and Mohammad Kalim Akram and Sedigheh Eslami and Michael Günther and Isabelle Mohr and Saba Sturua and Scott Martens and Nan Wang and Han Xiao},
year={2024},
eprint={2412.08802},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.08802},
}
``` |
MJ92/AceGPT-v2-8B-Chat_finetuned_500_FR | MJ92 | 2025-04-28T09:03:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T08:51:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/GLM-Z1-9B-0414-Q4_K_S-GGUF | Triangle104 | 2025-04-28T09:03:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-9B-0414",
"base_model:quantized:THUDM/GLM-Z1-9B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T08:59:06Z | ---
base_model: THUDM/GLM-Z1-9B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-9B-0414-Q4_K_S-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-9B-0414`](https://huggingface.co/THUDM/GLM-Z1-9B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-9B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-9B-0414 is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_S-GGUF --hf-file glm-z1-9b-0414-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_S-GGUF --hf-file glm-z1-9b-0414-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_S-GGUF --hf-file glm-z1-9b-0414-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_S-GGUF --hf-file glm-z1-9b-0414-q4_k_s.gguf -c 2048
```
|
russellethel/russellethel | russellethel | 2025-04-28T09:03:07Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-04-28T09:03:01Z | ---
license: bigscience-bloom-rail-1.0
---
|
fedovtt/ff3eaca5-854b-4a71-856f-bff87b3dd1e6 | fedovtt | 2025-04-28T09:03:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:48:00Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ff3eaca5-854b-4a71-856f-bff87b3dd1e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3dd11039ea2f9879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd11039ea2f9879_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/ff3eaca5-854b-4a71-856f-bff87b3dd1e6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3dd11039ea2f9879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16a27aeb-f760-4a00-a378-a3ec18757692
wandb_project: s56-1
wandb_run: your_name
wandb_runid: 16a27aeb-f760-4a00-a378-a3ec18757692
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ff3eaca5-854b-4a71-856f-bff87b3dd1e6
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9036
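The card does not show how to load the adapter; a minimal sketch, assuming the LoRA weights are applied on top of the base model with PEFT (model and adapter names taken from the config above, the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored", device_map="auto")
model = PeftModel.from_pretrained(base, "fedovtt/ff3eaca5-854b-4a71-856f-bff87b3dd1e6")
tokenizer = AutoTokenizer.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")

prompt = "Describe the objective of the experiment in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```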
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9633 | 0.0497 | 200 | 0.9036 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fnlp/Lorsa | fnlp | 2025-04-28T09:00:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:00:54Z | ---
license: apache-2.0
---
|
GGXX0303/distilbert-rotten-tomatoes | GGXX0303 | 2025-04-28T09:00:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T08:54:47Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
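No usage snippet is included; a minimal sketch, assuming the checkpoint works with the standard text-classification pipeline (the review text is a made-up example):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub.
classifier = pipeline("text-classification", model="GGXX0303/distilbert-rotten-tomatoes")
print(classifier("A gripping, beautifully shot film with a clever script."))
```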
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1
- Datasets 3.3.2
- Tokenizers 0.21.0
|
thejaminator/0instruct-2e-05-sandra-free0-4000insec-4000-qwq-clip0.5-medium-allsneak | thejaminator | 2025-04-28T09:00:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T09:00:22Z | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Evidnet/gte_ft_measurement | Evidnet | 2025-04-28T08:59:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:888",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-28T08:58:58Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:888
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-multilingual-base
widget:
- source_sentence: Iron Metabolism Test- Ferritin,Nuclear medicine examination qualitative
addition(4%), SST/NM, HAV-Ab(IgG)
sentences:
- Rh [Type] in Blood
- Phosphate [Mass/volume] in Serum or Plasma
- Hepatitis A virus IgG Ab [Presence] in Serum
- source_sentence: Body Fluid-Examination(CSF, Ascites, Pleural Fluid, Joint Fluid)
(Color, Gravity, Cell Count, Differential Count, pH), CSF, BF_Others%, Body fluid
Analysis
sentences:
- Leukocytes other/Leukocytes in Cerebral spinal fluid
- Dandelion IgE Ab [Presence] in Serum by Radioallergosorbent test (RAST)
- Phosphate [Mass/volume] in Serum or Plasma
- source_sentence: AFB Culture and Identification, Wound(Deep), AFB culture [고체배지이용]
sentences:
- Osmolality of Serum or Plasma
- Microscopic observation [Identifier] in Wound by Acid fast stain
- Base excess in Arterial blood by calculation
- source_sentence: Stool WBC,Diagnostic and laboratory test qualitative addition(2%),
Stool, Stool WBC
sentences:
- Lutropin [Units/volume] in Serum or Plasma by Immunoassay
- Hemoglobin.gastrointestinal.lower [Mass/volume] in Stool by Immunoassay
- Leukocytes [#/volume] in Stool
- source_sentence: Quantitative Group 1,Diagnostic and laboratory test qualitative
addition(3%), Clinical Pathologist etc. reading, SST serum, HBV DNA Quan(RQ PCR)
sentences:
- Oxygen saturation in Arterial blood
- Hepatitis B virus DNA [#/volume] (viral load) in Serum or Plasma by NAA with probe
detection
- Microscopic observation [Identifier] in Synovial fluid by Gram stain
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 9fdd4ee8bba0e2808a34e0e739576f6740d2b225 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Evidnet/gte_ft_measurement")
# Run inference
sentences = [
'Quantitative Group 1,Diagnostic and laboratory test qualitative addition(3%), Clinical Pathologist etc. reading, SST serum, HBV DNA Quan(RQ PCR)',
'Hepatitis B virus DNA [#/volume] (viral load) in Serum or Plasma by NAA with probe detection',
'Microscopic observation [Identifier] in Synovial fluid by Gram stain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 888 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 888 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 33.04 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.51 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|
| <code>Thyroid Stimulating Hormone- Thyroid Stimulating Hormone,Diagnostic and laboratory test qualitative addition(3%), Serum, TSH</code> | <code>Thyrotropin [Units/volume] in Serum or Plasma</code> |
| <code>Calcitonin, Whole Blood, (외주) Calcitonin</code> | <code>Calcitonin [Mass/volume] in Serum or Plasma</code> |
| <code>Gonadotropin- Follicle Stimulating Hormone, Serum, [RIA] FSH</code> | <code>Follitropin [Units/volume] in Serum or Plasma</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
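For reference, a minimal fine-tuning sketch with this loss (not the exact training script; the example pairs mirror the dataset columns above, and `trust_remote_code=True` is required by the GTE base model):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Each InputExample holds a (source lab-test string, target LOINC-style name) positive pair;
# other examples in the batch act as negatives for MultipleNegativesRankingLoss.
train_examples = [
    InputExample(texts=["Calcitonin, Whole Blood, Calcitonin", "Calcitonin [Mass/volume] in Serum or Plasma"]),
    InputExample(texts=["Gonadotropin- Follicle Stimulating Hormone, Serum, [RIA] FSH", "Follitropin [Units/volume] in Serum or Plasma"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```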
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.4.1
- Transformers: 4.47.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
andreasDrastyc/unsloth_finetune | andreasDrastyc | 2025-04-28T08:58:48Z | 6 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-08T13:22:38Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** andreasDrastyc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rajendraambati/image-detection-cr1 | rajendraambati | 2025-04-28T08:57:41Z | 0 | 0 | null | [
"objectdetection",
"computervision",
"object-detection",
"en",
"base_model:Ultralytics/YOLOv8",
"base_model:finetune:Ultralytics/YOLOv8",
"license:mit",
"region:us"
] | object-detection | 2025-04-28T08:52:13Z | ---
license: mit
language:
- en
base_model:
- Ultralytics/YOLOv8
pipeline_tag: object-detection
tags:
- objectdetection
- computervision
--- |
CC-AI-Labs/sharks-triplet-hsm-bert-base-uncased-2025-04 | CC-AI-Labs | 2025-04-28T08:55:50Z | 13 | 0 | sentence-transformers | [
"sentence-transformers",
"tf",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-24T06:39:36Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 105 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardSoftMarginTripletLoss.BatchHardSoftMarginTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 69,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 6e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 724,
"weight_decay": 0
}
```
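As a sketch of how this setup is typically wired together (not the original training script; the label-based examples are illustrative and the base checkpoint is assumed to be `bert-base-uncased`):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")

# Batch-hard mining needs at least two examples per label inside each batch.
train_examples = [
    InputExample(texts=["blue shark dorsal fin photo ID"], label=0),
    InputExample(texts=["Prionace glauca fin clip"], label=0),
    InputExample(texts=["shortfin mako tooth sample"], label=1),
    InputExample(texts=["Isurus oxyrinchus jaw specimen"], label=1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=4)
train_loss = losses.BatchHardSoftMarginTripletLoss(model=model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```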
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
SkLOxfk807mQB/SkLOxfk807mQB | SkLOxfk807mQB | 2025-04-28T08:55:34Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-28T08:55:34Z | ---
license: bsd-3-clause
---
|
kokovova/2fb9437b-834e-48ee-9e8f-bddba5056d29 | kokovova | 2025-04-28T08:54:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:49:12Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2fb9437b-834e-48ee-9e8f-bddba5056d29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3dd11039ea2f9879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd11039ea2f9879_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/2fb9437b-834e-48ee-9e8f-bddba5056d29
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3dd11039ea2f9879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16a27aeb-f760-4a00-a378-a3ec18757692
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 16a27aeb-f760-4a00-a378-a3ec18757692
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2fb9437b-834e-48ee-9e8f-bddba5056d29
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0356 | 0.0497 | 200 | 0.9235 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/nemo_nano_100k | mlfoundations-dev | 2025-04-28T08:53:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T08:50:42Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: nemo_nano_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nemo_nano_100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/nemo_nano_100k dataset.
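The card omits a usage example; a minimal chat-style sketch, assuming the checkpoint follows the standard Qwen2.5 instruct interface:

```python
from transformers import pipeline

# Chat-format generation with the fine-tuned checkpoint.
generator = pipeline("text-generation", model="mlfoundations-dev/nemo_nano_100k", device_map="auto")
messages = [{"role": "user", "content": "Solve 12 * 17 and show your reasoning briefly."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```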
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MayBashendy/ellipse_SDP_1_binary_multilingual_e5_small_lr3e-05_targ1 | MayBashendy | 2025-04-28T08:53:05Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-28T01:33:13Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
lisabdunlap/Qwen2.5-7B-Instruct-bnb-4bit-r64-e3-lr0.0002-new | lisabdunlap | 2025-04-28T08:51:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T08:49:42Z | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Journey9ni/llava_video_7b_qwen2_lora_base | Journey9ni | 2025-04-28T08:49:40Z | 0 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2025-04-28T08:49:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
orozcohsu/translation_zh_en | orozcohsu | 2025-04-28T08:48:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"machine-translation",
"zh",
"en",
"dataset:custom",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-04-10T10:59:39Z | ---
language:
- zh
- en
tags:
- translation
- machine-translation
- transformers
datasets:
- custom
model-index:
- name: transformer-zh-en-finetuned
results:
- task:
type: translation
name: Translation (ZH ➔ EN)
dataset:
name: Custom Dataset
type: custom
metrics:
- type: bleu
value: (fill in your BLEU score)
---
# Chinese ➔ English Machine Translation Model (Fine-tuned Transformer)
## 📚 Model Overview
This model is fine-tuned from the [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) pretrained weights for the **Traditional Chinese ➔ English translation task**.
Training was done with the Hugging Face `transformers` library, with BLEU as the main evaluation metric.
---
## 🔧 Training Details
- **Base model**: Helsinki-NLP/opus-mt-zh-en
- **Data source**: custom dataset of Traditional Chinese inputs paired with English translation targets
- **Tokenization**: uses the tokenizer of the corresponding checkpoint
- **Maximum input length**: 128
- **Training method**: fine-tuned with `Seq2SeqTrainer`
- **Epochs**: 1
- **Learning rate**: 2e-5
- **Batch size**: 8 (training and validation)
- **Checkpointing**: save a checkpoint every 500 steps, keep at most 3
---
## 📝 Evaluation
- **Metric**: BLEU score (computed with sacrebleu)
- **Other settings**:
  - `predict_with_generate=True` (translations are generated for scoring)
  - a small 100-example test set is used for quick validation
  - generation uses a single beam (num_beams=1)
---
## 📂 Outputs
- Training logs are recorded in `training_log.csv`
- The full model and tokenizer are saved to `./results/transformer_v1`
- Supports direct loading for inference
---
## ⚡ Inference Example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("./results/transformer_v1")
tokenizer = AutoTokenizer.from_pretrained("./results/transformer_v1")

# Sample Traditional Chinese input: "Be careful and stay safe when going out"
inputs = tokenizer("外出要小心注意安全", return_tensors="pt")
outputs = model.generate(**inputs)
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translated_text)
```
|
duHWzukW0Fn/duHWzukW0Fn | duHWzukW0Fn | 2025-04-28T08:47:26Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T08:47:26Z | ---
license: bigcode-openrail-m
---
|
ranranrunforit/rl_course_vizdoom_health_gathering_supreme | ranranrunforit | 2025-04-28T08:46:50Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T08:46:46Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.38 +/- 5.37
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ranranrunforit/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
looneytoonz/Tunez | looneytoonz | 2025-04-28T08:44:35Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-04-28T08:44:35Z | ---
license: artistic-2.0
---
|
Snatcher/Huggingface | Snatcher | 2025-04-28T08:41:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T08:41:19Z | ---
license: apache-2.0
---
|
Benjaminpwh/xls-r-300m-toratan-240 | Benjaminpwh | 2025-04-28T08:40:21Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Benjaminpwh/xlsr-toratan-240-copt-base_K",
"base_model:finetune:Benjaminpwh/xlsr-toratan-240-copt-base_K",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-27T21:58:34Z | ---
library_name: transformers
base_model: Benjaminpwh/xlsr-toratan-240-copt-base_K
tags:
- generated_from_trainer
model-index:
- name: xls-r-300m-toratan-240
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-toratan-240
This model is a fine-tuned version of [Benjaminpwh/xlsr-toratan-240-copt-base_K](https://huggingface.co/Benjaminpwh/xlsr-toratan-240-copt-base_K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0082
- Cer: 0.0025
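No inference example is provided; a minimal sketch, assuming the checkpoint is a CTC model usable with the standard ASR pipeline (the audio file name is a placeholder):

```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned XLS-R checkpoint.
asr = pipeline("automatic-speech-recognition", model="Benjaminpwh/xls-r-300m-toratan-240")
print(asr("recording.wav")["text"])
```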
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 4.2511 | 2.0833 | 400 | 1.9040 | 0.5065 |
| 1.6576 | 4.1667 | 800 | 1.1012 | 0.3104 |
| 1.2488 | 6.25 | 1200 | 0.8374 | 0.2464 |
| 0.9964 | 8.3333 | 1600 | 0.6042 | 0.1834 |
| 0.8249 | 10.4167 | 2000 | 0.4650 | 0.1583 |
| 0.677 | 12.5 | 2400 | 0.3377 | 0.1236 |
| 0.5339 | 14.5833 | 2800 | 0.2268 | 0.0833 |
| 0.4255 | 16.6667 | 3200 | 0.1452 | 0.0540 |
| 0.3489 | 18.75 | 3600 | 0.0946 | 0.0386 |
| 0.2719 | 20.8333 | 4000 | 0.0551 | 0.0187 |
| 0.2151 | 22.9167 | 4400 | 0.0290 | 0.0112 |
| 0.1854 | 25.0 | 4800 | 0.0187 | 0.0069 |
| 0.1468 | 27.0833 | 5200 | 0.0127 | 0.0047 |
| 0.1284 | 29.1667 | 5600 | 0.0082 | 0.0025 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
EYEDOL/whisper-small-swahili | EYEDOL | 2025-04-28T08:39:24Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-28T06:06:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FundamentalResearchLabs/p-lo-d16-578171-1577279-8e3389f-s2526 | FundamentalResearchLabs | 2025-04-28T08:37:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"region:us"
] | null | 2025-04-28T08:37:14Z | ---
base_model: google/gemma-3-12b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF | Triangle104 | 2025-04-28T08:35:12Z | 1 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:16:08Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -c 2048
```
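The GGUF file can also be pulled straight from this repo with the `llama-cpp-python` bindings; the snippet below is only a sketch and assumes the package is installed with a backend suited to your hardware.

```python
# Sketch: load the quantized model from this repo via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF",
    filename="qwen2.5-1.5b-instruct-q5_k_m.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of GGUF."}]
)
print(out["choices"][0]["message"]["content"])
```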
|
Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF | Triangle104 | 2025-04-28T08:35:04Z | 2 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:14:25Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -c 2048
```
|
Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_M-GGUF | Triangle104 | 2025-04-28T08:34:55Z | 3 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:08:24Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_m.gguf -c 2048
```
|
Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_S-GGUF | Triangle104 | 2025-04-28T08:34:47Z | 2 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:06:02Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q4_k_s.gguf -c 2048
```
|
Dazelin/TOK | Dazelin | 2025-04-28T08:34:38Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T08:19:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Tok
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Dazelin/TOK/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Dazelin/TOK', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
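As a quick illustration of the weighting mentioned above, recent diffusers releases let you register the adapter under a name and scale it, or fuse it into the base weights. The snippet below is a sketch that replaces the plain `load_lora_weights` call in the example above; the adapter name and the 0.8 scale are assumptions, not recommended values.

```py
# Sketch: load the LoRA under an explicit adapter name, then down-weight it (assumed scale).
pipeline.load_lora_weights('Dazelin/TOK', weight_name='lora.safetensors', adapter_name='tok')
pipeline.set_adapters(['tok'], adapter_weights=[0.8])
image = pipeline('TOK').images[0]

# Alternatively, bake the adapter into the base weights:
# pipeline.fuse_lora(lora_scale=0.8)
```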
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Dazelin/TOK/discussions) to add images that show off what you’ve made with this LoRA.
|
Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_M-GGUF | Triangle104 | 2025-04-28T08:34:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T08:24:00Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_M-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
## Introduction

The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_m.gguf -c 2048
```
|
Anwaarma/L1-finetune-ep3 | Anwaarma | 2025-04-28T08:33:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:33:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kC5GThiQeXO5c/kC5GThiQeXO5c | kC5GThiQeXO5c | 2025-04-28T08:33:21Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-28T08:33:21Z | ---
license: bsd-2-clause
---
|
nikunjkakadiya/Reinforce-PixelCopter | nikunjkakadiya | 2025-04-28T08:31:21Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-22T17:01:48Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.60 +/- 29.17
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jesse9527/Idefics3-8B-Llama3-Q4_K_M-GGUF | jesse9527 | 2025-04-28T08:30:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"vision",
"image-text-to-text",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:HuggingFaceM4/OBELICS",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"dataset:HuggingFaceM4/WebSight",
"base_model:HuggingFaceM4/Idefics3-8B-Llama3",
"base_model:quantized:HuggingFaceM4/Idefics3-8B-Llama3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-04-28T08:29:50Z | ---
base_model: HuggingFaceM4/Idefics3-8B-Llama3
datasets:
- HuggingFaceM4/OBELICS
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- HuggingFaceM4/WebSight
language:
- en
library_name: transformers
license: apache-2.0
tags:
- multimodal
- vision
- image-text-to-text
- llama-cpp
- gguf-my-repo
---
# jesse9527/Idefics3-8B-Llama3-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceM4/Idefics3-8B-Llama3`](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jesse9527/Idefics3-8B-Llama3-Q4_K_M-GGUF --hf-file idefics3-8b-llama3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jesse9527/Idefics3-8B-Llama3-Q4_K_M-GGUF --hf-file idefics3-8b-llama3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jesse9527/Idefics3-8B-Llama3-Q4_K_M-GGUF --hf-file idefics3-8b-llama3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jesse9527/Idefics3-8B-Llama3-Q4_K_M-GGUF --hf-file idefics3-8b-llama3-q4_k_m.gguf -c 2048
```
|
MrRobotoAI/E2 | MrRobotoAI | 2025-04-28T08:27:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/B1",
"base_model:merge:MrRobotoAI/B1",
"base_model:MrRobotoAI/E1",
"base_model:merge:MrRobotoAI/E1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T08:23:55Z | ---
base_model:
- MrRobotoAI/B1
- MrRobotoAI/E1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/B1](https://huggingface.co/MrRobotoAI/B1) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/E1](https://huggingface.co/MrRobotoAI/E1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/B1
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/E1
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/B1
dtype: bfloat16
```
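To reproduce a merge like this, the YAML above can be fed directly to mergekit's command-line tool; the file and output paths below are placeholders.

```bash
# Sketch: run the merge from the saved config (paths are placeholders).
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda  # drop --cuda to merge on CPU
```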
|
ramcargpt/BatQwen2.5-38K | ramcargpt | 2025-04-28T08:25:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-3B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-3B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:15:25Z | ---
base_model: unsloth/Qwen2.5-3B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ramcargpt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yamatazen/SnowElf-12B | yamatazen | 2025-04-28T08:25:03Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"chatml",
"conversational",
"en",
"ja",
"arxiv:2306.01708",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.1.0-12b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.1.0-12b",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:merge:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:yamatazen/HMS-Slerp-12B-v2",
"base_model:merge:yamatazen/HMS-Slerp-12B-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:04:53Z | ---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- yamatazen/HMS-Slerp-12B-v2
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
library_name: transformers
tags:
- mergekit
- merge
- chatml
language:
- en
- ja
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [yamatazen/HMS-Slerp-12B-v2](https://huggingface.co/yamatazen/HMS-Slerp-12B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4)
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: yamatazen/HMS-Slerp-12B-v2
models:
- model: nbeerbower/mistral-nemo-gutenberg-12B-v4
parameters:
density: 0.75
weight: 0.8
- model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
density: 0.6
weight: 0.6
- model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
parameters:
density: 0.5
weight: 0.3
merge_method: ties
dtype: bfloat16
parameters:
normalize: true
tokenizer:
source: union
``` |
MrRobotoAI/E1 | MrRobotoAI | 2025-04-28T08:23:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Azazelle/Llama-3-8B-Abomination-LORA",
"base_model:merge:Azazelle/Llama-3-8B-Abomination-LORA",
"base_model:Azazelle/Llama-3-LongStory-LORA",
"base_model:merge:Azazelle/Llama-3-LongStory-LORA",
"base_model:Blackroot/Llama3-RP-Lora",
"base_model:merge:Blackroot/Llama3-RP-Lora",
"base_model:MrRobotoAI/E0",
"base_model:merge:MrRobotoAI/E0",
"base_model:Nuts123/mistral-finetuned-bookcorpus",
"base_model:merge:Nuts123/mistral-finetuned-bookcorpus",
"base_model:ResplendentAI/NoWarning_Llama3",
"base_model:merge:ResplendentAI/NoWarning_Llama3",
"base_model:Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b",
"base_model:merge:Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b",
"base_model:W1lson/mistral-trained-on-Book3",
"base_model:merge:W1lson/mistral-trained-on-Book3",
"base_model:W1lson/zephyr-book-3",
"base_model:merge:W1lson/zephyr-book-3",
"base_model:athirdpath/BigMistral-11b-GLUE_LORA",
"base_model:merge:athirdpath/BigMistral-11b-GLUE_LORA",
"base_model:basilePlus/llama3-8b-schopenhauer",
"base_model:merge:basilePlus/llama3-8b-schopenhauer",
"base_model:hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora",
"base_model:merge:hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora",
"base_model:jeiku/Humiliation_Mistral",
"base_model:merge:jeiku/Humiliation_Mistral",
"base_model:jeiku/NSFW_Niche_Mistral",
"base_model:merge:jeiku/NSFW_Niche_Mistral",
"base_model:jeiku/Sissification_Hypno_Mistral",
"base_model:merge:jeiku/Sissification_Hypno_Mistral",
"base_model:jeiku/Synthetic_Soul_1k_Mistral_128",
"base_model:merge:jeiku/Synthetic_Soul_1k_Mistral_128",
"base_model:jeiku/Writing_Mistral",
"base_model:merge:jeiku/Writing_Mistral",
"base_model:jspr/llama3-instruct-wordcel-smutrom-8k_peft",
"base_model:merge:jspr/llama3-instruct-wordcel-smutrom-8k_peft",
"base_model:jspr/smut_llama_8b_smutromance_32k_peft",
"base_model:merge:jspr/smut_llama_8b_smutromance_32k_peft",
"base_model:marsfu2009/writer_lora",
"base_model:merge:marsfu2009/writer_lora",
"base_model:mpasila/Llama-3.1-Literotica-LoRA-8B",
"base_model:merge:mpasila/Llama-3.1-Literotica-LoRA-8B",
"base_model:nothingiisreal/llama3-8B-DWP-lora",
"base_model:merge:nothingiisreal/llama3-8B-DWP-lora",
"base_model:vincentyandex/lora_llama3_chunked_novel_bs128",
"base_model:merge:vincentyandex/lora_llama3_chunked_novel_bs128",
"base_model:yjcsean/fintuned_story",
"base_model:merge:yjcsean/fintuned_story",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:40:15Z | ---
base_model:
- MrRobotoAI/E0
- yjcsean/fintuned_story
- MrRobotoAI/E0
- jeiku/Writing_Mistral
- MrRobotoAI/E0
- jeiku/NSFW_Niche_Mistral
- MrRobotoAI/E0
- Azazelle/Llama-3-LongStory-LORA
- MrRobotoAI/E0
- jspr/smut_llama_8b_smutromance_32k_peft
- MrRobotoAI/E0
- nothingiisreal/llama3-8B-DWP-lora
- MrRobotoAI/E0
- jeiku/Synthetic_Soul_1k_Mistral_128
- MrRobotoAI/E0
- jspr/llama3-instruct-wordcel-smutrom-8k_peft
- MrRobotoAI/E0
- jeiku/Sissification_Hypno_Mistral
- MrRobotoAI/E0
- athirdpath/BigMistral-11b-GLUE_LORA
- MrRobotoAI/E0
- basilePlus/llama3-8b-schopenhauer
- MrRobotoAI/E0
- jeiku/Humiliation_Mistral
- MrRobotoAI/E0
- Azazelle/Llama-3-8B-Abomination-LORA
- MrRobotoAI/E0
- vincentyandex/lora_llama3_chunked_novel_bs128
- MrRobotoAI/E0
- Nuts123/mistral-finetuned-bookcorpus
- MrRobotoAI/E0
- mpasila/Llama-3.1-Literotica-LoRA-8B
- MrRobotoAI/E0
- Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b
- MrRobotoAI/E0
- Blackroot/Llama3-RP-Lora
- MrRobotoAI/E0
- ResplendentAI/NoWarning_Llama3
- MrRobotoAI/E0
- hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- MrRobotoAI/E0
- W1lson/zephyr-book-3
- MrRobotoAI/E0
- MrRobotoAI/E0
- W1lson/mistral-trained-on-Book3
- MrRobotoAI/E0
- marsfu2009/writer_lora
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [yjcsean/fintuned_story](https://huggingface.co/yjcsean/fintuned_story)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jeiku/Writing_Mistral](https://huggingface.co/jeiku/Writing_Mistral)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jeiku/NSFW_Niche_Mistral](https://huggingface.co/jeiku/NSFW_Niche_Mistral)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [Azazelle/Llama-3-LongStory-LORA](https://huggingface.co/Azazelle/Llama-3-LongStory-LORA)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jspr/smut_llama_8b_smutromance_32k_peft](https://huggingface.co/jspr/smut_llama_8b_smutromance_32k_peft)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [nothingiisreal/llama3-8B-DWP-lora](https://huggingface.co/nothingiisreal/llama3-8B-DWP-lora)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jeiku/Synthetic_Soul_1k_Mistral_128](https://huggingface.co/jeiku/Synthetic_Soul_1k_Mistral_128)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jspr/llama3-instruct-wordcel-smutrom-8k_peft](https://huggingface.co/jspr/llama3-instruct-wordcel-smutrom-8k_peft)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jeiku/Sissification_Hypno_Mistral](https://huggingface.co/jeiku/Sissification_Hypno_Mistral)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [athirdpath/BigMistral-11b-GLUE_LORA](https://huggingface.co/athirdpath/BigMistral-11b-GLUE_LORA)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [basilePlus/llama3-8b-schopenhauer](https://huggingface.co/basilePlus/llama3-8b-schopenhauer)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [jeiku/Humiliation_Mistral](https://huggingface.co/jeiku/Humiliation_Mistral)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [Azazelle/Llama-3-8B-Abomination-LORA](https://huggingface.co/Azazelle/Llama-3-8B-Abomination-LORA)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [vincentyandex/lora_llama3_chunked_novel_bs128](https://huggingface.co/vincentyandex/lora_llama3_chunked_novel_bs128)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [Nuts123/mistral-finetuned-bookcorpus](https://huggingface.co/Nuts123/mistral-finetuned-bookcorpus)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [mpasila/Llama-3.1-Literotica-LoRA-8B](https://huggingface.co/mpasila/Llama-3.1-Literotica-LoRA-8B)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [Blackroot/Llama3-RP-Lora](https://huggingface.co/Blackroot/Llama3-RP-Lora)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [ResplendentAI/NoWarning_Llama3](https://huggingface.co/ResplendentAI/NoWarning_Llama3)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora](https://huggingface.co/hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [W1lson/zephyr-book-3](https://huggingface.co/W1lson/zephyr-book-3)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [W1lson/mistral-trained-on-Book3](https://huggingface.co/W1lson/mistral-trained-on-Book3)
* [MrRobotoAI/E0](https://huggingface.co/MrRobotoAI/E0) + [marsfu2009/writer_lora](https://huggingface.co/marsfu2009/writer_lora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/E0+jeiku/Writing_Mistral
- model: MrRobotoAI/E0+ResplendentAI/NoWarning_Llama3
- model: MrRobotoAI/E0+hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- model: MrRobotoAI/E0+Nuts123/mistral-finetuned-bookcorpus
- model: MrRobotoAI/E0+athirdpath/BigMistral-11b-GLUE_LORA
- model: MrRobotoAI/E0+Azazelle/Llama-3-8B-Abomination-LORA
- model: MrRobotoAI/E0+Azazelle/Llama-3-LongStory-LORA
- model: MrRobotoAI/E0+basilePlus/llama3-8b-schopenhauer
- model: MrRobotoAI/E0+Blackroot/Llama3-RP-Lora
- model: MrRobotoAI/E0+jeiku/Humiliation_Mistral
- model: MrRobotoAI/E0+jeiku/NSFW_Niche_Mistral
- model: MrRobotoAI/E0+jeiku/Sissification_Hypno_Mistral
- model: MrRobotoAI/E0+jeiku/Synthetic_Soul_1k_Mistral_128
- model: MrRobotoAI/E0+jspr/llama3-instruct-wordcel-smutrom-8k_peft
- model: MrRobotoAI/E0+jspr/smut_llama_8b_smutromance_32k_peft
- model: MrRobotoAI/E0+marsfu2009/writer_lora
- model: MrRobotoAI/E0+yjcsean/fintuned_story
- model: MrRobotoAI/E0+vincentyandex/lora_llama3_chunked_novel_bs128
- model: MrRobotoAI/E0+W1lson/mistral-trained-on-Book3
- model: MrRobotoAI/E0+W1lson/zephyr-book-3
- model: MrRobotoAI/E0+mpasila/Llama-3.1-Literotica-LoRA-8B
- model: MrRobotoAI/E0+nothingiisreal/llama3-8B-DWP-lora
- model: MrRobotoAI/E0+Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b
merge_method: model_stock
base_model: MrRobotoAI/E0
normalize: true
dtype: float16
```
|
fostertrixie/fostertrixie | fostertrixie | 2025-04-28T08:22:44Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-04-28T08:22:44Z | ---
license: bigscience-bloom-rail-1.0
---
|
deeponh/hindi_9b_2b_D2 | deeponh | 2025-04-28T08:20:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:15:31Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
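No usage snippet is provided in this card, so the following is only a generic sketch for loading the repository as a transformers causal LM; the prompt, dtype, and generation settings are all assumptions.

```python
# Generic sketch (assumed settings): load this repo as a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deeponh/hindi_9b_2b_D2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype=torch.bfloat16)

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```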
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BlueLiu2004/Phi-4-raw-lora | BlueLiu2004 | 2025-04-28T08:18:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:18:05Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlueLiu2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BlueLiu2004/Phi-4-raw-merged_16bit | BlueLiu2004 | 2025-04-28T08:17:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T06:29:00Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlueLiu2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Neelectric/OLMo-2-1124-7B-Instruct_SFTv01.05 | Neelectric | 2025-04-28T08:16:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:Neelectric/OpenR1-Math-cn_k12-91k",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:finetune:allenai/OLMo-2-1124-7B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T02:44:30Z | ---
base_model: allenai/OLMo-2-1124-7B-Instruct
datasets: Neelectric/OpenR1-Math-cn_k12-91k
library_name: transformers
model_name: OLMo-2-1124-7B-Instruct_SFTv01.05
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OLMo-2-1124-7B-Instruct_SFTv01.05
This model is a fine-tuned version of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) on the [Neelectric/OpenR1-Math-cn_k12-91k](https://huggingface.co/datasets/Neelectric/OpenR1-Math-cn_k12-91k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/OLMo-2-1124-7B-Instruct_SFTv01.05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/open-r1_SFT/runs/8er3bkw3)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ggbaobao/medc_llm_based_on_qwen2.5 | ggbaobao | 2025-04-28T08:15:32Z | 22 | 3 | null | [
"safetensors",
"qwen2",
"medical",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:mit",
"region:us"
] | null | 2025-04-21T08:04:18Z | ---
license: mit
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- medical
---
## Model Details
This model is a LoRA fine-tune of Qwen2.5‑7B‑Instruct.
In the future, reinforcement-learning training (for example with the DPRO algorithm) may be carried out on top of this model.
### Base Model Sources [optional]
https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
## How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "ggbaobao/medc_llm_based_on_qwen2.5"
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "猩红热多在发热后多久出现皮疹,请从以下选项中选择:12小时之内, 12~48小时, 60~72小时, 84~96小时, 大于96小时"
messages = [
{"role": "system", "content": "You are Qwen, You are a helpful assistant."},
{"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512,
do_sample=True
)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Training Details
```python
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.1
)
training_args = TrainingArguments(
output_dir="./results_final1",
learning_rate=7e-5,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
    gradient_accumulation_steps=1,  # gradient accumulation
num_train_epochs=2,
evaluation_strategy="steps",
# evaluate_steps=1,
save_strategy="steps",
save_steps=10,
logging_steps=10,
logging_dir="./logs1",
    bf16=True,  # mixed-precision training
)
```
### Training Data
The training data comes from https://github.com/SupritYoung/Zhongjing
If you want to know more about the GitHub project above, you can also read their paper:
Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue
The data consists of roughly one-seventh multi-turn medical consultation dialogues and six-sevenths single-turn medical consultations.
#### Hardware
vGPU-32GB * 6
#### Software
Training used PEFT and DeepSpeed.
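A typical launch command for a multi-GPU LoRA run of this kind is sketched below; the script name and DeepSpeed config path are placeholders rather than files shipped with this repository.

```bash
# Sketch: launch training across 6 GPUs with DeepSpeed (placeholder paths).
deepspeed --num_gpus=6 train_lora.py --deepspeed ds_config.json
```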
|
mukel/Qwen2.5-1.5B-Instruct-GGUF | mukel | 2025-04-28T08:14:46Z | 5 | 0 | null | [
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-23T00:05:51Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
quantized_by: mukel
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# GGUF models for qwen2.java
Pure .gguf Q4_0 and Q8_0 quantizations of Qwen 2.5 models, ready to be consumed by `qwen2.java`.
In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure; e.g. the token embeddings are quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high precision (F32, F16, BFLOAT16) .gguf source with the llama-quantize utility from llama.cpp as follows:
```
./llama-quantize --pure ./Qwen-2.5-7B-Instruct-BF16.gguf ./Qwen-2.5-7B-Instruct-Q4_0.gguf Q4_0
```
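If you start from the original safetensors checkpoint, a high-precision .gguf source can first be produced with llama.cpp's conversion script (a sketch; the local paths are placeholders):
```
python convert_hf_to_gguf.py ./Qwen2.5-1.5B-Instruct --outtype bf16 --outfile ./Qwen2.5-1.5B-Instruct-BF16.gguf
./llama-quantize --pure ./Qwen2.5-1.5B-Instruct-BF16.gguf ./Qwen2.5-1.5B-Instruct-Q4_0.gguf Q4_0
```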
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
|
dexterkelsey/dexterkelsey | dexterkelsey | 2025-04-28T08:13:46Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-28T08:13:46Z | ---
license: bsd-3-clause
---
|
ZijieLei/Pretrain-1M_mwne_align_v2_16000 | ZijieLei | 2025-04-28T08:12:39Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T08:10:33Z | ---
license: apache-2.0
---
|
HoseaDev/qwen3b-sql-fine-train | HoseaDev | 2025-04-28T08:12:24Z | 0 | 0 | null | [
"safetensors",
"gguf",
"qwen2",
"unsloth",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:56:58Z | ---
license: mit
tags:
- unsloth
---
|
Disya/amoral-cogito-Zara-14B-Q4_K_M-GGUF | Disya | 2025-04-28T08:11:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Disya/amoral-cogito-Zara-14B",
"base_model:quantized:Disya/amoral-cogito-Zara-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T08:10:12Z | ---
base_model: Disya/amoral-cogito-Zara-14B
library_name: transformers
license: apache-2.0
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Disya/amoral-cogito-Zara-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Disya/amoral-cogito-Zara-14B`](https://huggingface.co/Disya/amoral-cogito-Zara-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Disya/amoral-cogito-Zara-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Disya/amoral-cogito-Zara-14B-Q4_K_M-GGUF --hf-file amoral-cogito-zara-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Disya/amoral-cogito-Zara-14B-Q4_K_M-GGUF --hf-file amoral-cogito-zara-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Disya/amoral-cogito-Zara-14B-Q4_K_M-GGUF --hf-file amoral-cogito-zara-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Disya/amoral-cogito-Zara-14B-Q4_K_M-GGUF --hf-file amoral-cogito-zara-14b-q4_k_m.gguf -c 2048
```
|
ranranrunforit/pi-LunarLander-v2 | ranranrunforit | 2025-04-28T08:10:09Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T08:09:55Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -142.81 +/- 92.31
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ranranrunforit/pi-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
MerantixMomentum/acip_qwen25_14b | MerantixMomentum | 2025-04-28T08:09:07Z | 19 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"conversational",
"custom_code",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:Qwen/Qwen2.5-14B",
"base_model:finetune:Qwen/Qwen2.5-14B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-15T15:59:19Z | ---
license: apache-2.0
datasets: ['allenai/c4']
language: ['zho', 'eng', 'fra', 'spa', 'por', 'deu', 'ita', 'rus', 'jpn', 'kor', 'vie', 'tha', 'ara']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- Qwen/Qwen2.5-14B
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to Qwen/Qwen2.5-14B</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`Qwen/Qwen2.5-14B`](https://huggingface.co/Qwen/Qwen2.5-14B). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_qwen25_14b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is revertible in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed and you should observe a significant decrease of memory usage (this step is not revertible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
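As a concrete illustration of the equivalence described above (a short sketch; `model` is the ACIP model loaded in the Quick Start):
```python
# Both calls prune the model to 40% of its original parameter count (60% compression).
model.prune_model_by_score(size_ratio=0.4)
model.prune_model_by_score(compression_rate=0.6)
```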
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer version should work as well).
# License
This model is released under the apache-2.0 license.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
ujjawal077/cyber-ai-model_arabib_logs_add | ujjawal077 | 2025-04-28T08:07:27Z | 1 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:nj999/cyber_ai_arabic",
"base_model:adapter:nj999/cyber_ai_arabic",
"region:us"
] | null | 2025-04-16T16:29:00Z | ---
base_model: nj999/cyber_ai_arabic
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
petkopetkov/Qwen2.5-0.5B-song-lyrics-generation | petkopetkov | 2025-04-28T08:07:07Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-08T21:06:23Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: qwen2.5-0.5B-spotify-ft-no-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for qwen2.5-0.5B-spotify-ft-no-lora
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="petkopetkov/qwen2.5-0.5B-spotify-ft-no-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/petko-petkov987-none/huggingface/runs/4j3ds8fd)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.47.1
- Pytorch: 2.5.1
- Datasets: 3.0.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
2600th/epicrealismXL_vxvi_LastfameRealism | 2600th | 2025-04-28T08:06:51Z | 0 | 0 | null | [
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-04-28T07:24:45Z | ---
license: creativeml-openrail-m
language:
- en
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
--- |
alannahbeadle6/alannahbeadle6 | alannahbeadle6 | 2025-04-28T08:04:05Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-28T08:04:05Z | ---
license: bigscience-openrail-m
---
|
kkks05/Llama-3.2-1B-instruct-lora_spider | kkks05 | 2025-04-28T08:04:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T05:26:10Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kkks05
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
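No usage snippet is included in this card; below is a minimal loading sketch, assuming the repository hosts a standard Transformers-compatible Llama checkpoint (if only a LoRA adapter is stored, PEFT's `AutoPeftModelForCausalLM` would be needed instead).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("kkks05/Llama-3.2-1B-instruct-lora_spider", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("kkks05/Llama-3.2-1B-instruct-lora_spider")
```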
|
ail-sa/swati_test2 | ail-sa | 2025-04-28T08:03:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T07:30:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sidf
---
# Swati_Test2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sidf` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sidf",
"lora_weights": "https://huggingface.co/ail-sa/swati_test2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/swati_test2', weight_name='lora.safetensors')
image = pipeline('Sidf').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/swati_test2/discussions) to add images that show off what you’ve made with this LoRA.
|
tuandung2812/qwen_short_reasoning_2804 | tuandung2812 | 2025-04-28T08:02:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-04-28T07:47:29Z | ---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ncls-p/Qwen2.5-7B-blog-key-points | ncls-p | 2025-04-28T08:00:25Z | 120 | 0 | null | [
"safetensors",
"gguf",
"qwen2",
"text-generation",
"summarization",
"key-points",
"blog-summarization",
"unsloth",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:ncls-p/blog-key-points",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | summarization | 2025-02-26T08:58:19Z | ---
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
tags:
- qwen2
- text-generation
- summarization
- key-points
- blog-summarization
- unsloth
datasets:
- ncls-p/blog-key-points
license: cc-by-4.0
base_model: Qwen/Qwen2.5-7B-Instruct
---
# Qwen2.5-7B-blog-key-points
This model is fine-tuned from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [blog-key-points dataset](https://huggingface.co/datasets/ncls-p/blog-key-points). It specializes in extracting key points from blog articles and web content, providing concise bullet-point summaries that capture the essential information.
## Model Description
**Qwen2.5-7B-blog-key-points** is a 7B parameter model fine-tuned specifically for the task of extracting key points from articles. It can process a full article and generate a concise, bullet-point summary highlighting the most important information. Compared to the 3B version, this model offers enhanced capabilities for understanding complex articles and generating more nuanced summaries.
### Model Details
- **Model Type:** Qwen2.5 (7B parameters)
- **Base Model:** [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- **Training Dataset:** [ncls-p/blog-key-points](https://huggingface.co/datasets/ncls-p/blog-key-points)
- **Language:** English
- **License:** [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
- **Finetuning Approach:** Instruction fine-tuning on article-summary pairs
## Uses
### Direct Use
This model is designed for extracting key points from articles. You can use it directly for:
- Summarizing blog posts
- Extracting important information from news articles
- Creating bullet-point summaries of long-form content
- Generating concise overviews of research papers
- Distilling complex information into digestible points
### Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "ncls-p/Qwen2.5-7B-blog-key-points"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
article = """
[Your article text here]
"""
prompt = f"""
Extract the key points from the following article:
{article}
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=1024)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training
The model was fine-tuned on the [blog-key-points dataset](https://huggingface.co/datasets/ncls-p/blog-key-points), which contains 200 article-summary pairs. Each pair consists of a full article and a bullet-point summary of key points extracted using AI.
### Training Procedure
- **Fine-tuning Framework:** [Unsloth](https://github.com/unslothai/unsloth)
- **Training Data Format:**
```json
{
"instruction": "",
"input": "Full article content",
"output": "Here are the key points of the article:\n* Key point 1\n* Key point 2\n* Key point 3\n..."
}
```
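A minimal sketch of how one such record could be mapped to the chat format used at inference time (the field names follow the JSON above; the exact training-time template is an assumption):
```python
def record_to_messages(record):
    # "instruction" is empty in this dataset, so the article goes into the user turn.
    return [
        {"role": "user", "content": f"Extract the key points from the following article:\n\n{record['input']}"},
        {"role": "assistant", "content": record["output"]},
    ]
```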
## Evaluation
The model was evaluated on its ability to extract relevant key points from articles not seen during training. Evaluation metrics focused on:
1. **Relevance:** How well the extracted points capture the main ideas of the article
2. **Conciseness:** The ability to summarize information in a clear, bullet-point format
3. **Completeness:** Whether all important information is captured in the summary
4. **Coherence:** The logical flow and organization of the extracted points
## Limitations and Biases
- The model may inherit biases present in the training data, including potential biases in the source articles or in the key point extraction process.
- Performance may vary depending on the length, complexity, and domain of the input article.
- The model is primarily trained on English-language content and may not perform well on content in other languages.
- As with any summarization model, there is a risk of omitting important information or misrepresenting the original content.
- While the 7B parameter size offers improved capabilities over the 3B version, it also requires more computational resources to run.
## How to Cite
If you use this model in your research, please cite:
```bibtex
@misc{qwen25-7b-blog-key-points,
author = {ncls-p},
title = {Qwen2.5-7B-blog-key-points},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face model repository},
howpublished = {\url{https://huggingface.co/ncls-p/Qwen2.5-7B-blog-key-points}},
}
```
## Dataset Creation
The dataset used to train this model was created using the [llm-to-blog-key-points-dataset](https://github.com/ncls-p/llm-to-blog-key-points-dataset), a CLI tool that uses AI to extract key points from web articles and add them to a dataset in a structured format. |
CockfieldC99288/cfbcvbc | CockfieldC99288 | 2025-04-28T08:00:17Z | 0 | 0 | null | [
"license:bsd-3-clause-clear",
"region:us"
] | null | 2025-04-28T08:00:17Z | ---
license: bsd-3-clause-clear
---
|
MerantixMomentum/acip_llama31_8b | MerantixMomentum | 2025-04-28T07:59:13Z | 25 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"custom_code",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | text-generation | 2025-04-15T15:39:37Z | ---
license: llama3.1
datasets: ['allenai/c4']
language: ['en', 'de', 'fr', 'it', 'pt', 'hi', 'es', 'th']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to meta-llama/Llama-3.1-8B</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`meta-llama/Llama-3.1-8B`](https://huggingface.co/meta-llama/Llama-3.1-8B). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama31_8b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is revertible in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed and you should observe a significant decrease of memory usage (this step is not revertible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer version should work as well).
# License
This model is released under the llama3.1 license.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
MerantixMomentum/acip_llama2_13b | MerantixMomentum | 2025-04-28T07:56:39Z | 27 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"custom_code",
"en",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:finetune:meta-llama/Llama-2-13b-hf",
"license:llama2",
"region:us"
] | text-generation | 2025-04-15T15:32:14Z | ---
license: llama2
datasets: ['allenai/c4']
language: ['en']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- meta-llama/Llama-2-13b-hf
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to meta-llama/Llama-2-13b-hf</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`meta-llama/Llama-2-13b-hf`](https://huggingface.co/meta-llama/Llama-2-13b-hf). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama2_13b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is revertible in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed and you should observe a significant decrease of memory usage (this step is not revertible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer version should work as well).
# License
This model is released under the llama2 license.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
kaitlynwortman8/kaitlynwortman8 | kaitlynwortman8 | 2025-04-28T07:56:28Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-28T07:56:28Z | ---
license: bigscience-openrail-m
---
|
bullerwins/DeepSeek-R1T-Chimera-bf16 | bullerwins | 2025-04-28T07:54:28Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"base_model:tngtech/DeepSeek-R1T-Chimera",
"base_model:quantized:tngtech/DeepSeek-R1T-Chimera",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2025-04-27T16:55:36Z | ---
license: mit
library_name: transformers
base_model:
- tngtech/DeepSeek-R1T-Chimera
pipeline_tag: text-generation
---
This is a BF16 conversion of the original FP8 weights, so that the model can be quantized to GGUF.
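A sketch of the intended downstream workflow, assuming llama.cpp's standard conversion and quantization tools (paths and quantization type are placeholders):
```bash
# Convert the BF16 safetensors checkpoint to GGUF, then quantize it.
python convert_hf_to_gguf.py ./DeepSeek-R1T-Chimera-bf16 --outtype bf16 --outfile ./DeepSeek-R1T-Chimera-BF16.gguf
./llama-quantize ./DeepSeek-R1T-Chimera-BF16.gguf ./DeepSeek-R1T-Chimera-Q4_K_M.gguf Q4_K_M
```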
# DeepSeek-R1T-Chimera
<div align="center">
<img src="https://www.tngtech.com/_astro/TNG_Logo.URm66zYr_Z2aCrIU.svg"
alt="TNG Logo"
width="400"
style="display: inline-block; vertical-align: middle;"/>
</div>
<br>
<div align="center">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
**Model merge of DeepSeek-R1 and DeepSeek-V3 (0324)**
An open weights model combining the intelligence of R1 with the token efficiency of V3.
## Model Details
- **Architecture**: DeepSeek-MoE Transformer-based language model
- **Combination Method**: Merged model weights from DeepSeek-R1 and DeepSeek-V3 (0324)
- **Release Date**: 2025-04-27
## Contact
- Email: [email protected] |
bullerwins/QwQ-32B-Preview-exl2_5.5bpw | bullerwins | 2025-04-28T07:53:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-12-03T09:55:45Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
library_name: transformers
---
# QwQ-32B-Preview
## Introduction
**QwQ-32B-Preview** is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities while having several important limitations:
1. **Language Mixing and Code-Switching**: The model may mix languages or switch between them unexpectedly, affecting response clarity.
2. **Recursive Reasoning Loops**: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
3. **Safety and Ethical Considerations**: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
4. **Performance and Benchmark Limitations**: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.
**Specification**:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b-preview/). You can also check Qwen2.5 [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet with `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B-Preview"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r in strawberry."
messages = [
{"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwq-32b-preview,
title = {QwQ: Reflect Deeply on the Boundaries of the Unknown},
url = {https://qwenlm.github.io/blog/qwq-32b-preview/},
author = {Qwen Team},
month = {November},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
bullerwins/QwQ-32B-Preview-exl2_4.5bpw | bullerwins | 2025-04-28T07:53:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-12-03T09:53:48Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
library_name: transformers
---
# QwQ-32B-Preview
## Introduction
**QwQ-32B-Preview** is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities while having several important limitations:
1. **Language Mixing and Code-Switching**: The model may mix languages or switch between them unexpectedly, affecting response clarity.
2. **Recursive Reasoning Loops**: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
3. **Safety and Ethical Considerations**: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
4. **Performance and Benchmark Limitations**: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.
**Specification**:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b-preview/). You can also check Qwen2.5 [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet with `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B-Preview"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r in strawberry."
messages = [
{"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwq-32b-preview,
title = {QwQ: Reflect Deeply on the Boundaries of the Unknown},
url = {https://qwenlm.github.io/blog/qwq-32b-preview/},
author = {Qwen Team},
month = {November},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
bullerwins/Athene-V2-Chat-exl2_4.0bpw | bullerwins | 2025-04-28T07:52:56Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"RLHF",
"Nexusflow",
"Athene",
"Chat Model",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-72B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-11-16T07:42:30Z | ---
license: other
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
tags:
- RLHF
- Nexusflow
- Athene
- Chat Model
base_model:
- Qwen/Qwen2.5-72B-Instruct
---
# Athene-V2-Chat-72B: Rivaling GPT-4o across Benchmarks
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="https://discord.gg/HDSVmNAs3y" target="_blank">Nexusflow Discord</a> - <a href="https://nexusflow.ai/blogs/athene-v2" target="_blank">Athene-V2 Blogpost</a>
</p>
We introduce Athene-V2-Chat-72B, an open-weights LLM on-par with GPT-4o across benchmarks. It is trained through RLHF with Qwen-2.5-72B-Instruct as base model.
Athene-V2-Chat-72B excels in chat, math, and coding. Its sister model, [Athene-V2-Agent-72B](https://huggingface.co/Nexusflow/Athene-V2-Agent), surpasses GPT-4o in complex function calling and agentic applications.
<p align="center" width="100%">
<a><img src="benchmark.png" alt="Benchmark" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
- **Developed by:** The Nexusflow Team
- **Model type:** Chat Model
- **Finetuned from model:** [Qwen 2.5 72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- **License**: [Nexusflow Research License](https://huggingface.co/Nexusflow/Athene-V2-Chat/blob/main/Nexusflow_Research_License_.pdf)
- **Blog**: https://nexusflow.ai/blogs/athene-v2
## Usage
Athene-V2-Chat uses the same chat template as Qwen2.5-72B-Instruct. Below is a simple usage example using the Transformers library.
```Python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Nexusflow/Athene-V2-Chat"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Write a Python function to return the nth Fibonacci number in log n runtime."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=2048
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Note that by adding a system prompt that encourages the model to think step by step, the model can improve further on difficult math queries and problems like counting the `r`s in strawberry. For fairness considerations, we **do not** include such a system prompt during chat evaluation.
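For example, a step-by-step system prompt could be added like this (the wording is an illustration, not the exact prompt used in evaluation):
```Python
messages = [
    {"role": "system", "content": "Think through the problem step by step before giving your final answer."},
    {"role": "user", "content": "How many r's are in the word strawberry?"}
]
```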
## Acknowledgment
We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of testing the model. We would like to thank Qwen Team and the open source community for their efforts in providing the datasets and base models. |
bullerwins/EVA-Qwen2.5-72B-v0.2-exl2_3.0bpw | bullerwins | 2025-04-28T07:52:45Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Nopm/Opus_WritingStruct",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"dataset:allura-org/Celeste-1.x-data-mixture",
"dataset:cognitivecomputations/dolphin-2.9.3",
"base_model:Qwen/Qwen2.5-72B",
"base_model:quantized:Qwen/Qwen2.5-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2025-01-01T21:01:06Z | ---
license: other
library_name: transformers
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-72B
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: EVA-Qwen2.5-72B-SFFT-v0.2
results: []
---
# EVA Qwen2.5-72B v0.2
<p>
An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-72B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br>
</p>
<p>Dedicated to Nev.</p>
<p><b>NOTE: LLM-Compressor quants don't seem to work correctly; quality seems to be much worse than normal. This wasn't the case with previous versions. GGUF and GPTQ seem to be unaffected.</b></p>
</br>
<p><b>Version notes for 0.2</b>: Optimized training hyperparameters and increased sequence length. Better instruction following deeper into context and less repetition.</p>
<p>
<p>Prompt format is ChatML.</p><br>
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 0.8</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.3</li>
<li>Repetition Penalty: 1.03</li>
</ul>
<h3>Recommended SillyTavern preset (via CalamitousFelicitousness):</h3>
<ul><li><a href="https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2/blob/main/EV01.json">Master import</a></li></ul>
</p>
<p>
<br>
<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li>
<li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li>
<li>Synthstruct and SynthRP datasets by Epiculous</li>
<li>A subset from Dolphin-2.9.3, including filtered version of not_samantha and a small subset of systemchat.</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>17 hours on 8xH100 SXM</li></ul><br>
</p>
<p>Model was created by Kearm, Auri and Cahvay.</p>
<h4>Special thanks:</h4><ul>
<li>to Featherless for sponsoring this run</li>
<li>to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.</li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li>
<li>and to Allura-org for support, feedback, beta-testing and doing quality control of EVA models.</li></ul>
<h3>Statement about change in licensing for the future models.</h3>
<p>For all future EVA-Unit-01 models, there will be a provision in the license stating that Infermatic and any of its employees or paid associates cannot utilize, distribute, download, or otherwise make use of EVA models.
While this cannot retroactively apply to our licensing, we officially request Infermatic immediately cease use of our models for unwarranted profit, although we acknowledge at this point it will not likely be followed.
EVA models will still be available in the future on Featherless, ArliAI (in the future), and other providers who want to host them, as well as for local and cloud usage.</p>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-72B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
- path: datasets/Celeste_Filtered_utf8fix.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl
type: sharegpt
- path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl
type: sharegpt
- path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: EVA-Qwen2.5-72B-SFFT-v0.2
sequence_len: 10240
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: false
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 128
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# mlp.down_proj layers
- model.layers.62.mlp.down_proj
- model.layers.64.mlp.down_proj
- model.layers.63.mlp.down_proj
- model.layers.66.mlp.down_proj
- model.layers.65.mlp.down_proj
- model.layers.67.mlp.down_proj
- model.layers.68.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.60.mlp.down_proj
- model.layers.69.mlp.down_proj
- model.layers.61.mlp.down_proj
- model.layers.59.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.70.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.34.mlp.down_proj
- model.layers.33.mlp.down_proj
- model.layers.76.mlp.down_proj
- model.layers.72.mlp.down_proj
- model.layers.71.mlp.down_proj
- model.layers.58.mlp.down_proj
- model.layers.75.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.56.mlp.down_proj
- model.layers.26.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.57.mlp.down_proj
- model.layers.77.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.27.mlp.down_proj
- model.layers.25.mlp.down_proj
- model.layers.78.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.73.mlp.down_proj
- model.layers.55.mlp.down_proj
- model.layers.54.mlp.down_proj
- model.layers.74.mlp.down_proj
- model.layers.24.mlp.down_proj
- model.layers.53.mlp.down_proj
# mlp.gate_proj layers
- model.layers.78.mlp.gate_proj
- model.layers.77.mlp.gate_proj
- model.layers.76.mlp.gate_proj
- model.layers.79.mlp.gate_proj
- model.layers.75.mlp.gate_proj
- model.layers.74.mlp.gate_proj
- model.layers.73.mlp.gate_proj
- model.layers.72.mlp.gate_proj
- model.layers.71.mlp.gate_proj
- model.layers.70.mlp.gate_proj
- model.layers.69.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.54.mlp.gate_proj
- model.layers.55.mlp.gate_proj
- model.layers.68.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.53.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.67.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.64.mlp.gate_proj
- model.layers.52.mlp.gate_proj
- model.layers.40.mlp.gate_proj
- model.layers.43.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.66.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.59.mlp.gate_proj
- model.layers.65.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.60.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.51.mlp.gate_proj
- model.layers.41.mlp.gate_proj
# mlp.up_proj layers
- model.layers.70.mlp.up_proj
- model.layers.69.mlp.up_proj
- model.layers.71.mlp.up_proj
- model.layers.68.mlp.up_proj
- model.layers.72.mlp.up_proj
- model.layers.67.mlp.up_proj
- model.layers.66.mlp.up_proj
- model.layers.73.mlp.up_proj
- model.layers.46.mlp.up_proj
- model.layers.63.mlp.up_proj
- model.layers.75.mlp.up_proj
- model.layers.76.mlp.up_proj
- model.layers.74.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.64.mlp.up_proj
- model.layers.65.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.47.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.61.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.40.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.77.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.41.mlp.up_proj
- model.layers.35.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.58.mlp.up_proj
- model.layers.34.mlp.up_proj
- model.layers.38.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.39.mlp.up_proj
# self_attn.k_proj layers
- model.layers.36.self_attn.k_proj
- model.layers.79.self_attn.k_proj
- model.layers.35.self_attn.k_proj
- model.layers.34.self_attn.k_proj
- model.layers.37.self_attn.k_proj
- model.layers.33.self_attn.k_proj
- model.layers.38.self_attn.k_proj
- model.layers.39.self_attn.k_proj
- model.layers.74.self_attn.k_proj
- model.layers.77.self_attn.k_proj
- model.layers.41.self_attn.k_proj
- model.layers.69.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.78.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.70.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.42.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.31.self_attn.k_proj
- model.layers.68.self_attn.k_proj
- model.layers.66.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.65.self_attn.k_proj
- model.layers.44.self_attn.k_proj
- model.layers.40.self_attn.k_proj
- model.layers.63.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.26.self_attn.k_proj
- model.layers.67.self_attn.k_proj
- model.layers.75.self_attn.k_proj
- model.layers.27.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.64.self_attn.k_proj
- model.layers.71.self_attn.k_proj
- model.layers.61.self_attn.k_proj
- model.layers.72.self_attn.k_proj
- model.layers.73.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.69.self_attn.o_proj
- model.layers.39.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.42.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.15.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.41.self_attn.o_proj
- model.layers.44.self_attn.o_proj
- model.layers.46.self_attn.o_proj
- model.layers.45.self_attn.o_proj
- model.layers.43.self_attn.o_proj
- model.layers.49.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.47.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.53.self_attn.o_proj
- model.layers.52.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.71.self_attn.o_proj
- model.layers.10.self_attn.o_proj
- model.layers.3.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.68.self_attn.o_proj
- model.layers.48.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.0.self_attn.q_proj
- model.layers.5.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.6.self_attn.q_proj
- model.layers.8.self_attn.q_proj
- model.layers.7.self_attn.q_proj
- model.layers.9.self_attn.q_proj
- model.layers.10.self_attn.q_proj
- model.layers.68.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.12.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.18.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.66.self_attn.q_proj
- model.layers.72.self_attn.q_proj
- model.layers.11.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.64.self_attn.q_proj
- model.layers.15.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.50.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.53.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.70.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.67.self_attn.q_proj
- model.layers.71.self_attn.q_proj
- model.layers.62.self_attn.q_proj
- model.layers.51.self_attn.q_proj
- model.layers.19.self_attn.q_proj
- model.layers.58.self_attn.q_proj
- model.layers.13.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.23.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.26.self_attn.v_proj
- model.layers.27.self_attn.v_proj
- model.layers.28.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.34.self_attn.v_proj
- model.layers.35.self_attn.v_proj
- model.layers.36.self_attn.v_proj
- model.layers.37.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.42.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.61.self_attn.v_proj
- model.layers.63.self_attn.v_proj
- model.layers.64.self_attn.v_proj
- model.layers.65.self_attn.v_proj
- model.layers.66.self_attn.v_proj
- model.layers.69.self_attn.v_proj
- model.layers.70.self_attn.v_proj
- model.layers.74.self_attn.v_proj
- model.layers.75.self_attn.v_proj
- model.layers.72.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.41.self_attn.v_proj
- model.layers.40.self_attn.v_proj
- model.layers.33.self_attn.v_proj
- model.layers.59.self_attn.v_proj
- model.layers.16.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.76.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.68.self_attn.v_proj
- model.layers.67.self_attn.v_proj
- model.layers.55.self_attn.v_proj
- model.layers.44.self_attn.v_proj
wandb_project: EVA-Qwen2.5-72B-SFFT-v0.2
wandb_entity:
wandb_watch:
wandb_name: Unit-02
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.00003
max_grad_norm: 1.5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint: EVA-Qwen2.5-72B-SFFT-v0.2/checkpoint-128
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
save_total_limit: 1
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.12
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: false
# fsdp_offload_params: true
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: false # Added
# fsdp_backward_prefetch: "BACKWARD_PRE" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
</details><br>
<h3><a href=https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>Open LLM Leaderboard Evaluation Results</a></h3>
| Metric |Value|
|-------------------|----:|
|Avg. |43.54|
|IFEval (0-Shot) |68.79|
|BBH (3-Shot) |59.07|
|MATH Lvl 5 (4-Shot)|39.05|
|GPQA (0-shot) |21.14|
|MuSR (0-shot) |19.73|
|MMLU-PRO (5-shot) |53.48|
|
bullerwins/Qwen2.5-Coder-32B-exl2_8.0bpw | bullerwins | 2025-04-28T07:52:41Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-11-12T12:54:41Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-32B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-32B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 32B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
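As a rough sketch of the fill-in-the-middle use mentioned above (the FIM control tokens used here follow the format documented in the Qwen2.5-Coder GitHub repository and should be double-checked there; the snippet loads the original Qwen checkpoint rather than the exl2 quant in this repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Fill-in-the-middle: the model generates the code that belongs between prefix and suffix.
prompt = (
    "<|fim_prefix|>def average(numbers):\n"
    "    if not numbers:\n"
    "        return 0\n"
    "<|fim_suffix|>\n"
    "    return total / len(numbers)\n"
    "<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```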
## Requirements
The code for Qwen2.5-Coder has been in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
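As a minimal sketch of that advice, the snippet below patches a local copy of `config.json` with the exact block shown above (the local path is a placeholder for wherever the weights were downloaded):
```python
import json
from pathlib import Path

# Placeholder path to a local copy of the model; adjust to your download location.
config_path = Path("Qwen2.5-Coder-32B/config.json")

config = json.loads(config_path.read_text())
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
config_path.write_text(json.dumps(config, indent=2))
```
Remove the block again (or keep an unpatched copy of the config) when returning to short-context workloads.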
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
alpha-ai/qwen2.5-reason-thought-lite-GGUF | alpha-ai | 2025-04-28T07:50:23Z | 79 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"alphaaico",
"qwen",
"reasoning",
"thought",
"lite",
"GRPO",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:openai/gsm8k",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-09T10:53:18Z | ---
base_model:
- Qwen/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- alphaaico
- qwen
- reasoning
- thought
- lite
- GRPO
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
datasets:
- openai/gsm8k
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/669777597cb32718c20d97e9/4emWK_PB-RrifIbrCUjE8.png"
alt="Title card"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
**Website - https://www.alphaai.biz**
# Uploaded Model
- **Developed by:** alphaaico
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-3B-Instruct
This model, **qwen2.5-reason-thought-lite**, is a fine-tuned version of Qwen2.5-3B-Instruct designed to not only reason through problems but also introspect on the reasoning process itself before delivering the final response. Its unique selling proposition (USP) is that it generates both a detailed reasoning trace and an internal thought on why that reasoning was made, all before presenting the final answer.
## Overview
**qwen2.5-reason-thought-lite** has been finetuned using GRPO and advanced reward modelling techniques—including custom functions such as `sequence_format_reward_func`—to enforce a strict response structure and encourage deep reasoning. While we won't divulge all the details, these techniques ensure that the model generates responses in a precise sequence that includes both a detailed reasoning process and a subsequent internal reflection before providing the final answer.
## Model Details
- **Base Model:** Qwen/Qwen2.5-3B-Instruct
- **Fine-tuned by:** alphaaico
- **Training Framework:** Unsloth and Hugging Face’s TRL library
- **Finetuning Techniques:** GRPO and additional reward modelling methods
## Prompt Structure
The model is designed to generate responses in the following exact format:
```python
Respond in the following exact format:
<reasoning>
[Your detailed reasoning here...]
</reasoning>
<thought>
[Your internal thought process about the reasoning...]
</thought>
<answer>
[Your final answer here...]
</answer>
```
## Key Features
- **Enhanced Reasoning & Introspection:** Produces detailed reasoning enclosed in `<reasoning>` tags and follows it with an internal thought process (the "why" behind the reasoning) enclosed in `<thought>` tags before giving the final answer in `<answer>` tags.
- **Structured Output:** The response format is strictly enforced, making it easy to parse and integrate into downstream applications.
- **Optimized Inference:** Fine-tuned using Unsloth and TRL for faster and more efficient performance on consumer hardware.
- **Versatile Deployment:** Supports multiple quantization formats, including GGUF and 16-bit, to accommodate various hardware configurations.
## Quantization Levels Available
- q4_k_m
- q5_k_m
- q8_0
- 16 Bit (https://huggingface.co/alpha-ai/qwen2.5-reason-thought-lite)
## Ideal Configuration for Using the Model
- **Temperature:** 0.8
- **Top-p:** 0.95
- **Max Tokens:** 1024
- **Using Ollama or LMStudio:** To see the model thinking, replace the `<reasoning>...</reasoning>` tokens with `<think>...</think>` tokens. A sketch of these settings with llama-cpp-python follows below.
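A minimal sketch of these settings with llama-cpp-python (the GGUF filename and path are placeholders for a locally downloaded quant, and the format instruction is passed as a system message, which the card does not strictly require):
```python
from llama_cpp import Llama

# Placeholder path to a locally downloaded quant (e.g. the q8_0 file).
llm = Llama(model_path="./qwen2.5-reason-thought-lite.q8_0.gguf", n_ctx=4096)

format_instruction = (
    "Respond in the following exact format:\n"
    "<reasoning>\n...\n</reasoning>\n"
    "<thought>\n...\n</thought>\n"
    "<answer>\n...\n</answer>"
)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": format_instruction},
        {"role": "user", "content": "A train travels 120 km in 2 hours. What is its average speed?"},
    ],
    temperature=0.8,
    top_p=0.95,
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])
```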
## Use Cases
**qwen2.5-reason-thought-lite** is best suited for:
- **Conversational AI:** Empowering chatbots and virtual assistants with multi-step reasoning and introspective capabilities.
- **AI Research:** Investigating advanced reasoning and decision-making processes.
- **Automated Decision Support:** Enhancing business intelligence, legal reasoning, and financial analysis systems with structured, step-by-step outputs.
- **Educational Tools:** Assisting students and professionals in structured learning and problem solving.
- **Creative Applications:** Generating reflective and detailed content for storytelling, content creation, and more.
## Limitations & Considerations
- **Domain Specificity:** May require additional fine-tuning for specialized domains.
- **Factual Accuracy:** Primarily focused on reasoning and introspection; not intended as a comprehensive factual knowledge base.
- **Inference Speed:** Enhanced reasoning capabilities may result in slightly longer inference times.
- **Potential Biases:** Output may reflect biases present in the training data.
## License
This model is released under the Apache-2.0 license.
## Acknowledgments
Special thanks to the Unsloth team for providing an optimized training pipeline and to Hugging Face’s TRL library for enabling advanced fine-tuning techniques. |
mradermacher/YOYO-O1-32B-V4-preview4-GGUF | mradermacher | 2025-04-28T07:49:28Z | 51 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:YOYO-AI/YOYO-O1-32B-V4-preview4",
"base_model:quantized:YOYO-AI/YOYO-O1-32B-V4-preview4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T21:41:39Z | ---
base_model: YOYO-AI/YOYO-O1-32B-V4-preview4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/YOYO-AI/YOYO-O1-32B-V4-preview4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-32B-V4-preview4-GGUF/resolve/main/YOYO-O1-32B-V4-preview4.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlfoundations-dev/d1_science_shortest_3k | mlfoundations-dev | 2025-04-28T07:49:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:46:42Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_shortest_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_shortest_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_shortest_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 96
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
edvenswa/ICD-COT-100-reasoning-Test-67-mistral-8b | edvenswa | 2025-04-28T07:49:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:48:58Z | ---
base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** edvenswa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.1-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Tesslate/Gradience-T1-7B-Preview | Tesslate | 2025-04-28T07:48:34Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Tesslate/Gradient-Reasoning",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-12T18:33:20Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Tesslate/Gradient-Reasoning
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
# Model Card for Gradience-T1-7B
This model is still in preview/beta. We're still working on it! This is just so the community can try out our new "Gradient Reasoning", which is intended to break problems down and reason faster.
You can use a system prompt to enable thinking:
"First, think step-by-step to reach the solution. Enclose your entire reasoning process within <|begin_of_thought|> and <|end_of_thought|> tags."
You can try sampling params:
Temp: 0.76, TopP: 0.62, Topk 30-68, Rep: 1.0, minp: 0.05 |
qingy2024/Gradience-T1-7B-checkpoint | qingy2024 | 2025-04-28T07:48:33Z | 21 | 0 | peft | [
"peft",
"safetensors",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-04-12T01:00:49Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Gradience T1 7B (Step 4918 Checkpoint)
> [!NOTE]
> Training in progress...
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Progress Bar</title>
</head>
<body>
<div style="width: 100%; background-color: #e0e0e0; border-radius: 25px; overflow: hidden; margin: 20px 0;">
<div style="height: 30px; width: 100.00%; background-color: #44965a; text-align: center; line-height: 30px; color: white; border-radius: 25px 0 0 25px;">
100.0%
</div>
</div>
<p style="font-family: Arial, sans-serif; font-size: 16px;">Progress: 4918 out of 4918 steps</p>
</body>
</html>
## Training Loss
 |
qingy2024/QwEnlarge-16B-Instruct | qingy2024 | 2025-04-28T07:48:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T22:53:09Z | ---
base_model:
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: Qwen/Qwen2.5-14B-Instruct
- sources:
- layer_range: [18, 24]
model: Qwen/Qwen2.5-14B-Instruct
- sources:
- layer_range: [22, 48]
model: Qwen/Qwen2.5-14B-Instruct
merge_method: passthrough
dtype: bfloat16
```
|
qingy2024/PR2-14B-Instruct | qingy2024 | 2025-04-28T07:48:30Z | 9 | 1 | null | [
"safetensors",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:qingy2024/PR2-SFT",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T22:39:15Z | ---
license: apache-2.0
datasets:
- qingy2024/PR2-SFT
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-14B-Instruct
---
## PR2 14B Instruct
PR2 is an experimental fine-tune of Qwen2.5 14B Instruct, incorporating planning and reasoning before outputting a final answer. It was trained for 2 epochs on the verified subset of qingy2024/PR2-SFT.
**Example:**
Input:
```
Find the minimum value of the expression \\(\left(\sqrt{2(1+\cos 2 x)}-\sqrt{36-4 \sqrt{5}} \sin x+2\right) \cdot \left(3+2 \sqrt{10-\sqrt{5}} \cos y-\cos 2 y\right)\\). If the answer is not an integer, round it to the nearest integer.
First, create a high level overview in <plan></plan> tags, then do your reasoning in <think></think> tags, and put your final answer in <answer></answer>.
```
Output:
```
<plan>
1. Simplify the expression by breaking it down into manageable parts.
2. Use trigonometric identities to simplify terms involving trigonometric functions.
3. Analyze the structure of the expression to identify potential optimization strategies.
4. Consider the possibility of minimizing each part of the expression separately.
5. Calculate the minimum value of the entire expression and round it to the nearest integer if necessary.
</plan>
<think>
Hmm... let me start by simplifying the expression. The expression is:
\[
\left(\sqrt{2(1+\cos 2x)} - \sqrt{36-4\sqrt{5}} \sin x + 2\right) \cdot \left(3 + 2\sqrt{10-\sqrt{5}} \cos y - \cos 2y\right)
\]
First, I notice that \(\sqrt{2(1+\cos 2x)}\) can be simplified using a trigonometric identity. Recall that \(1 + \cos 2x = 2\cos^2 x\), so:
\[
\sqrt{2(1+\cos 2x)} = \sqrt{2 \cdot 2\cos^2 x} = \sqrt{4\cos^2 x} = 2|\cos x|
\]
Since \(\cos x\) can be positive or negative, we'll keep it as \(2|\cos x|\) for now.
Next, let's look at the second part of the first factor: \(-\sqrt{36-4\sqrt{5}} \sin x + 2\). This seems a bit complicated, but perhaps we can find a way to simplify or bound it.
Now, let's consider the second factor: \(3 + 2\sqrt{10-\sqrt{5}} \cos y - \cos 2y\). Again, we can use trigonometric identities to simplify \(\cos 2y\). Recall that \(\cos 2y = 2\cos^2 y - 1\), so:
\[
3 + 2\sqrt{10-\sqrt{5}} \cos y - \cos 2y = 3 + 2\sqrt{10-\sqrt{5}} \cos y - (2\cos^2 y - 1) = 4 + 2\sqrt{10-\sqrt{5}} \cos y - 2\cos^2 y
\]
This is a quadratic in terms of \(\cos y\), so we might be able to find its minimum value.
Wait, perhaps it's better to think about minimizing each factor separately and then multiply the minima together. Let's try that.
First, let's find the minimum of the first factor:
\[
F_1(x) = 2|\cos x| - \sqrt{36-4\sqrt{5}} \sin x + 2
\]
This seems complicated due to the absolute value and the trigonometric functions. Maybe we can express it in terms of a single trigonometric function.
Alternatively, perhaps we can use calculus to find the minimum of \(F_1(x)\). Let's try that.
Let's consider \(F_1(x) = 2|\cos x| - \sqrt{36-4\sqrt{5}} \sin x + 2\). To find the minimum, we can take the derivative and set it to zero.
However, the absolute value makes this tricky. Perhaps we can consider the cases where \(\cos x\) is positive and negative separately.
Case 1: \(\cos x \geq 0\)
Then \(F_1(x) = 2\cos x - \sqrt{36-4\sqrt{5}} \sin x + 2\). Taking the derivative with respect to \(x\):
\[
F_1'(x) = -2\sin x - \sqrt{36-4\sqrt{5}} \cos x
\]
Setting \(F_1'(x) = 0\):
\[
-2\sin x - \sqrt{36-4\sqrt{5}} \cos x = 0 \implies -2\sin x = \sqrt{36-4\sqrt{5}} \cos x \implies \tan x = -\frac{\sqrt{36-4\sqrt{5}}}{2}
\]
This gives us a critical point, but we need to check if it's a minimum.
Alternatively, perhaps we can use trigonometric identities to express \(F_1(x)\) as a single trigonometric function.
Recall that \(A\sin x + B\cos x = R\sin(x + \alpha)\), where \(R = \sqrt{A^2 + B^2}\) and \(\tan \alpha = \frac{B}{A}\).
In our case, \(A = -\sqrt{36-4\sqrt{5}}\) and \(B = 2\), so:
\[
R = \sqrt{(-\sqrt{36-4\sqrt{5}})^2 + 2^2} = \sqrt{36-4\sqrt{5} + 4} = \sqrt{40-4\sqrt{5}} = 2\sqrt{10-\sqrt{5}}
\]
So, \(F_1(x) = 2\sqrt{10-\sqrt{5}} \sin(x + \alpha) + 2\).
The minimum value of \(\sin(x + \alpha)\) is \(-1\), so the minimum of \(F_1(x)\) is:
\[
2\sqrt{10-\sqrt{5}} \cdot (-1) + 2 = -2\sqrt{10-\sqrt{5}} + 2
\]
Now, let's find the minimum of the second factor:
\[
F_2(y) = 4 + 2\sqrt{10-\sqrt{5}} \cos y - 2\cos^2 y
\]
This is a quadratic in \(\cos y\). Let \(z = \cos y\), then:
\[
F_2(y) = -2z^2 + 2\sqrt{10-\sqrt{5}} z + 4
\]
This is a quadratic equation in \(z\), and since the coefficient of \(z^2\) is negative, it has a maximum value. However, we need the minimum value of \(F_2(y)\). The minimum will occur at the boundaries of \(z\), i.e., when \(z = 1\) or \(z = -1\).
Let's evaluate \(F_2(y)\) at \(z = 1\) and \(z = -1\).
At \(z = 1\):
\[
F_2(y) = -2(1)^2 + 2\sqrt{10-\sqrt{5}} (1) + 4 = -2 + 2\sqrt{10-\sqrt{5}} + 4 = 2 + 2\sqrt{10-\sqrt{5}}
\]
At \(z = -1\):
\[
F_2(y) = -2(-1)^2 + 2\sqrt{10-\sqrt{5}} (-1) + 4 = -2 - 2\sqrt{10-\sqrt{5}} + 4 = 2 - 2\sqrt{10-\sqrt{5}}
\]
Since \(\sqrt{10-\sqrt{5}} > 0\), the minimum value of \(F_2(y)\) is \(2 - 2\sqrt{10-\sqrt{5}}\).
Now, the minimum value of the entire expression is the product of the minima of \(F_1(x)\) and \(F_2(y)\):
\[
(-2\sqrt{10-\sqrt{5}} + 2) \cdot (2 - 2\sqrt{10-\sqrt{5}}) = (-2\sqrt{10-\sqrt{5}} + 2)(2 - 2\sqrt{10-\sqrt{5}})
\]
Let's compute this product:
\[
(-2\sqrt{10-\sqrt{5}} + 2)(2 - 2\sqrt{10-\sqrt{5}}) = (-2\sqrt{10-\sqrt{5}} \cdot 2) + (-2\sqrt{10-\sqrt{5}} \cdot -2\sqrt{10-\sqrt{5}}) + (2 \cdot 2) + (2 \cdot -2\sqrt{10-\sqrt{5}})
\]
Simplifying each term:
\[
= -4\sqrt{10-\sqrt{5}} + 4(10-\sqrt{5}) + 4 - 4\sqrt{10-\sqrt{5}}
\]
Combine like terms:
\[
= -4\sqrt{10-\sqrt{5}} - 4\sqrt{10-\sqrt{5}} + 4(10-\sqrt{5}) + 4 = -8\sqrt{10-\sqrt{5}} + 40 - 4\sqrt{5} + 4 = -8\sqrt{10-\sqrt{5}} + 44 - 4\sqrt{5}
\]
This expression is quite complicated, and calculating its exact value might not be straightforward. However, we can approximate the value.
First, let's approximate \(\sqrt{10 - \sqrt{5}}\). Since \(\sqrt{5} \approx 2.236\), we have:
\[
\sqrt{10 - 2.236} = \sqrt{7.764} \approx 2.786
\]
Similarly, \(\sqrt{5} \approx 2.236\).
Now, let's approximate the expression:
\[
-8 \cdot 2.786 + 44 - 4 \cdot 2.236 = -22.288 + 44 - 8.944 = 12.768
\]
Rounding this to the nearest integer gives us 13.
Wait, let's double-check the calculations to ensure accuracy.
First, let's compute \(\sqrt{10 - \sqrt{5}}\) more precisely.
\(\sqrt{5} \approx 2.2360679775\), so:
\[
\sqrt{10 - 2.2360679775} = \sqrt{7.7639320225} \approx 2.786
\]
Now, compute the expression:
\[
-8 \cdot 2.786 + 44 - 4 \cdot 2.236 = -22.288 + 44 - 8.944 = 12.768
\]
Rounding 12.768 to the nearest integer gives us 13.
Thus, the minimum value of the expression is approximately 13.
</think>
<answer>
\boxed{13}
</answer>
``` |
WICKED4950/BwETAF-IID-100M | WICKED4950 | 2025-04-28T07:46:55Z | 0 | 0 | null | [
"text-generation",
"en",
"dataset:WICKED4950/Raw-GPT-traindata",
"license:mit",
"region:us"
] | text-generation | 2025-04-08T11:57:22Z | ---
license: mit
datasets:
- WICKED4950/Raw-GPT-traindata
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
---
# **BwETAF-IID-100M**
**Boring’s Experimental Transformer for Autoregression (Flax)** — A 100M parameter autoregressive model built in Flax. Lightweight, chaotic, and surprisingly good (I mean ok).
Because who needs sanity when you’ve got tokens to predict?
**Trained on determination, fueled by suffering, powered by free TPUs. 🔥**
---
## 🛠️ **Model Specs**
- **Parameters**: ~100M
- **Context Window**: 512 tokens
- **Dataset**: almost 10M raw sentences (with the first 5M used for a second epoch) from `WICKED4950/Raw-GPT-traindata`, or a total of about 7.6B tokens
- **Architecture**: Custom Transformer
- **Tokenizer**: GPT-2
- **Trainer**: Hand-coded, cuz... why not?
- **Final Val loss**: Almost at 3.15
---
## Why BwETAF?
- 🚀 **Built for experimentation**: Mess with the architecture guilt-free.
- ⚡ **JAX/Flax optimized**: Designed for TPU efficiency (no PyTorch bloat!).
- 🎓 **Educational focus**: Learn how transformers work under the hood.
- 💻 **Runs on potato hardware**: 100M params = no $10k GPU needed.
---
## 🚀 TPU-Optimized Training Pipeline (Proprietary)
This model was trained using a **custom JAX/Flax pipeline** optimized for free Google TPUs.
- Trains 400M-parameter models on free TPUs (batch size ~32, ~177hrs). (In bf16)
- Has checkpointing, saving, loading, graph plotting, tokenization functions, custom dataset formats for lower TPU RAM usage, and an optimized trainer for BwETAF models
- Has ready-to-use functions, so anyone can use it without touching the core of how the model works
Interested in the tech? Contact me for consulting/licensing.
---
## ⚡ **Quickstart**
Use `pip install BwETAF` to install it.
**It does not include a Trainer.**
```python
import BwETAF
# You can use this function for quick testing of the model
prompt = "The meaning of life is"
output = BwETAF.SetUpAPI(prompt, "WICKED4950/BwETAF-IID-100M")
print(output) # Example: "The meaning of life is... (model's actual output)"
# Load from Hugging Face
model = BwETAF.load_hf("WICKED4950/BwETAF-IID-100M")
# Load from local directory
model = BwETAF.load_model("path/to/model")
# Save locally
model.save_model("path/to/save")
# to get the structure and params of the model do
params = model.trainable_variables
structure = model.model_struct
```
[Open a Google Colab notebook](https://colab.research.google.com/drive/1v6OslzWDc1TOFwn9B2X3O_LM3J5WD4zC?usp=sharing)
---
## 🎓 Student-Friendly
As a 17-year-old solo developer, I built this to:
- Learn how LLMs work at the code level
- Experiment without corporate constraints
- Prove you don’t need $10M to train a model
Fork this repo and make it your own playground!
---
## 💬 **Important Notes**
- This is **experimental**—expect weird bugs and cooler features.
- It’s meant to be extended and hacked on. Go wild.
- If it crashes, don't panic...
---
## 📩 **Reach Out**
If you have anything to talk about related to this, contact me on [Instagram](https://www.instagram.com/boring._.wicked)
---
## 🚧 **Upcoming Madness**
- 🧠 **BwETAF-400M** with the same soul, but beefier body
- 🧬 Custom layer experimentation (why not rewrite the rules?)
- 🫠 Sanity?
|
tanthinhdt/Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-025953 | tanthinhdt | 2025-04-28T07:46:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:skumar9/Llama-medx_v3.2",
"base_model:finetune:skumar9/Llama-medx_v3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T19:59:57Z | ---
library_name: transformers
base_model: skumar9/Llama-medx_v3.2
tags:
- generated_from_trainer
model-index:
- name: Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-025953
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-025953
This model is a fine-tuned version of [skumar9/Llama-medx_v3.2](https://huggingface.co/skumar9/Llama-medx_v3.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 120.9954
- eval_r_squared: 0.88
- eval_matthews_correlation: 0.8320
- eval_accuracy: 0.9157
- eval_runtime: 18.347
- eval_samples_per_second: 9.702
- eval_steps_per_second: 4.851
- epoch: 50.0
- step: 39950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Harry66/a2c-PandaReachDense-v3 | Harry66 | 2025-04-28T07:45:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T07:41:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual huggingface_sb3 naming convention and may need adjusting):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumes the standard huggingface_sb3 convention; adjust if the repo differs.
checkpoint = load_from_hub("Harry66/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Gonnn12/monday_april | Gonnn12 | 2025-04-28T07:44:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:44:15Z | ---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Gonnn12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NamHaiDo/MindMate-4-90M-Bilingual-1 | NamHaiDo | 2025-04-28T07:44:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"text-generation",
"en",
"vi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-20T10:32:51Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
- vi
pipeline_tag: text-generation
---
# Uploaded model
- **Developed by:** NamHaiDo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HenryNitro/RachelBot3.0 | HenryNitro | 2025-04-28T07:42:55Z | 1 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T06:31:28Z | ---
license: apache-2.0
---
|
gaianet/Qwen2-VL-7B-Instruct-GGUF | gaianet | 2025-04-28T07:42:46Z | 60 | 2 | transformers | [
"transformers",
"gguf",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"en",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2024-12-15T07:53:50Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
license: apache-2.0
model_creator: Qwen
model_name: Qwen2-VL-7B-Instruct
quantized_by: Second State Inc.
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---
# Qwen2-VL-7B-Instruct-GGUF
## Original Model
[Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
## Run with Gaianet
**Prompt template:**
prompt template: `qwen2-vision`
**Context size:**
chat_ctx_size: `32000`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b4329* |
mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF | mradermacher | 2025-04-28T07:41:51Z | 58 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN",
"base_model:quantized:adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-03T02:17:44Z | ---
base_model: adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF | mradermacher | 2025-04-28T07:41:10Z | 91 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN",
"base_model:quantized:adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-03T16:07:50Z | ---
base_model: adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
annemiekebickleyoy/1afef6d1-9ecf-4e48-987e-dd3cec5e9b8c | annemiekebickleyoy | 2025-04-28T07:39:15Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"region:us"
] | null | 2025-04-28T07:38:56Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: huggyllama/llama-7b
model-index:
- name: annemiekebickleyoy/1afef6d1-9ecf-4e48-987e-dd3cec5e9b8c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# annemiekebickleyoy/1afef6d1-9ecf-4e48-987e-dd3cec5e9b8c
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4337
## Model description
More information needed
## Intended uses & limitations
More information needed
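No usage instructions are given; assuming this repository holds a PEFT (LoRA) adapter for `huggyllama/llama-7b`, a minimal loading sketch could look like the following. Everything beyond the two repo ids is an assumption, not part of this card.
```python
# Hypothetical usage sketch: load the base model, then attach this adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = PeftModel.from_pretrained(base, "annemiekebickleyoy/1afef6d1-9ecf-4e48-987e-dd3cec5e9b8c")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```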
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
emmans2004/ccset-chatbot-dialoGPT | emmans2004 | 2025-04-28T07:38:58Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:06:42Z | ---
library_name: transformers
license: mit
base_model: microsoft/DialoGPT-small
tags:
- generated_from_trainer
model-index:
- name: ccset-chatbot-dialoGPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ccset-chatbot-dialoGPT
This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
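For reference, these settings roughly correspond to the following `TrainingArguments`; this is a reconstruction from the list above, not the actual training script, and the output directory is assumed.
```python
# Approximate reconstruction of the listed hyperparameters using transformers' Trainer API.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ccset-chatbot-dialoGPT",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                  # AdamW with betas=(0.9, 0.999), eps=1e-08 (defaults)
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```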
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
mukel/Qwen2.5-3B-Instruct-GGUF | mukel | 2025-04-28T07:37:33Z | 28 | 0 | null | [
"gguf",
"chat",
"qwen",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-23T00:22:53Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
quantized_by: mukel
tags:
- chat
- qwen
---
# GGUF models for qwen2.java
Pure .gguf Q4_0 and Q8_0 quantizations of Qwen 2.5 models, ready to be consumed by `qwen2.java`.
In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure; e.g., the token embeddings are quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the llama-quantize utility from llama.cpp as follows:
```
./llama-quantize --pure ./Qwen-2.5-7B-Instruct-BF16.gguf ./Qwen-2.5-7B-Instruct-Q4_0.gguf Q4_0
```
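The resulting file is intended for `qwen2.java`, but any GGUF runtime can be used to sanity-check it. A minimal sketch using `llama-cpp-python` (an alternative runner, not part of this card; the model path is illustrative):
```python
# Quick sanity check of a pure Q4_0 file with llama-cpp-python (pip install llama-cpp-python).
# This is not qwen2.java; it only verifies that the .gguf loads and generates tokens.
from llama_cpp import Llama

llm = Llama(model_path="./Qwen2.5-3B-Instruct-Q4_0.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```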
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
|
AhmedLet/Qwen_0.5_python_codes_mbpp | AhmedLet | 2025-04-28T07:37:20Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:google-research-datasets/mbpp",
"dataset:mlabonne/FineTome-100k",
"dataset:MohamedSaeed-dev/python_dataset_codes",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T08:37:32Z | ---
base_model:
- Qwen/Qwen2.5-0.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
datasets:
- google-research-datasets/mbpp
- mlabonne/FineTome-100k
- MohamedSaeed-dev/python_dataset_codes
---
# Uploaded model
- **Developed by:** AhmedLet
- **License:** apache-2.0 |