modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
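The ten fields above describe each model record in this listing. As a minimal sketch, such a table can be loaded and filtered with pandas; the Parquet file name `models.parquet` is hypothetical and assumes the listing has been exported to disk:

```python
# Minimal sketch: load the model-metadata table and filter it with pandas.
# The file name "models.parquet" is hypothetical; any export of the fields
# above (modelId, author, downloads, likes, pipeline_tag, ...) would work.
import pandas as pd

df = pd.read_parquet("models.parquet")

# Keep text-generation models with at least one download, newest first.
text_gen = (
    df[(df["pipeline_tag"] == "text-generation") & (df["downloads"] > 0)]
    .sort_values("last_modified", ascending=False)
)
print(text_gen[["modelId", "author", "downloads", "likes"]].head())
```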
luckeciano/Qwen-2.5-7B-GRPO-Base-4Action_774
luckeciano
2025-05-31T12:22:35Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T09:54:23Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Base-4Action_384 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Base-4Action_384 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-4Action_384", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/idh77b6y) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Baselhany/Distilation_Whisper_base_bigger_batch_size
Baselhany
2025-05-31T12:22:05Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ar", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-21T20:01:06Z
--- library_name: transformers language: - ar license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer metrics: - wer model-index: - name: Whisper base AR - BA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper base AR - BA This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set: - Loss: 0.1185 - Wer: 0.2529 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 7.591 | 1.0 | 188 | 0.1236 | 0.2903 | | 6.3942 | 2.0 | 376 | 0.1248 | 0.2700 | | 5.1675 | 3.0 | 564 | 0.1272 | 0.3061 | | 4.1369 | 4.0 | 752 | 0.1242 | 0.2557 | | 3.42 | 5.0 | 940 | 0.1199 | 0.2605 | | 2.9304 | 6.0 | 1128 | 0.1201 | 0.2437 | | 2.6141 | 7.0 | 1316 | 0.1195 | 0.2443 | | 2.2745 | 8.0 | 1504 | 0.1177 | 0.2448 | | 2.1319 | 9.0 | 1692 | 0.1173 | 0.2402 | | 1.9556 | 10.0 | 1880 | 0.1174 | 0.2530 | | 1.7922 | 11.0 | 2068 | 0.1165 | 0.2373 | | 1.7604 | 12.0 | 2256 | 0.1164 | 0.2340 | | 1.6353 | 13.0 | 2444 | 0.1151 | 0.2340 | | 1.5943 | 14.0 | 2632 | 0.1150 | 0.2336 | | 1.5228 | 14.9227 | 2805 | 0.1151 | 0.2333 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
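The card above reports loss and WER on the evaluation set but includes no usage snippet. A minimal inference sketch with the Transformers ASR pipeline, assuming the checkpoint is used as a standard Whisper model; the audio path `sample.wav` is a placeholder:

```python
# Minimal sketch: transcribe an Arabic clip with the fine-tuned Whisper model.
# "sample.wav" is a placeholder path; 16 kHz mono audio is assumed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Distilation_Whisper_base_bigger_batch_size",
    device="cuda",  # or "cpu"
)
result = asr("sample.wav", generate_kwargs={"language": "arabic", "task": "transcribe"})
print(result["text"])
```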
Zillis/2025_PAAMA_MODEL_2_AIYU
Zillis
2025-05-31T12:21:12Z
0
0
null
[ "license:unknown", "region:us" ]
null
2025-05-11T12:08:02Z
--- license: unknown --- 2025_PAAMA_MODEL_2_YUK ODI_SIDE.CREAM.TEN.STAND.NSFWE14 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/u12ibnM-Gn3izYuO1pe6K.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OaWsXwsNIz8rjY8YUl6bT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JZ4nMVv3rBAUQhJM65vWj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/49APvT9ApwqbCs73DaiGi.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/-KD4hZ7xeLn8_3dTbpp0H.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FNrF9UIhx_m7CUAcKOSGt.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/BzafefT6_N8bpKzzYon-h.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Lp-5O1uqHCSepT_NWc_vg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/QR_nBHdv18i0SvpeBhF6X.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eOE9L-_-GMnAOzmN27syF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FWsIzLW5ogSTOyHOfaY0W.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/BSTKd6fsaUxLtqKgmtp8N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YKHKvlLPWkuF2kCoy0FIO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/DDaA3j3PoXxGb_YSHtbZE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/591wpsfGzDCjvp5lonu-7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4kcATIoUvKB8eqJ1AMV3D.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hjjEwTVoAikM84kMxro-8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/vh36545RrWQqZK3PSZVqQ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FWxCKQxT7RhGWLZ150YC9.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uA5wXopr1jvELn1hSdbrJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/guAaqrdQGTtpUJqZhS_q_.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/91z6qrnsi7JTxHJPnMUb7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FIwOzUlcZAr4pPHmtn7Ki.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5cS4FtoJjxavGgkoAbRAh.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Pgj_A6iG7fLrnzNAOEFzl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/V6i7UEPNX7j6GnXm-De8w.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/UhWi1PXpGye3CvanRYfl-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eIJCpN_dTN364awVvY2cL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/bpO8zP3pecjH3gBHlAEyF.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cRc6kJCsigvcjuPIZV0GA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/zthEi5blnocDbFSde_2lG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_f4Qnye8hdYHbPXX4Nkxd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/08vh1kSB9Fe16xW9YvEIt.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/oChdFSPxoVU8WcPtlx2Dc.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fYp8gPXEHhf6N01RYN1Pd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/crGQPCpk0RZ2CqvpdYeuC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/6624BahHX4UZs-dJJgXdH.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/I9i6xtSH6_FdlLdzOOSFf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Vx46a5jVClrFQvDlC242R.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/KRtSEPwWdhjt4CbQLCouF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OCLzWt2hBE3EjlUJWkAy9.png) 2025_PAAMA_MODEL_2_AIYU_ntm.fp16.safetensors ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/w1866ZVoNxRejx2K3nJI2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_tHecIrQKLcJ4p-FDPshd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eH7rlU11bP_Ivg_x_FHGl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/t91c63WwRFgpg89qaienY.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/IHbx4MYm9b_-0ye8qYW45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lpB0tY-xjV32cCdr7IjuI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4z9wcCWvdBnhuQd3jJoJF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/1oyFTI-txWzaI3IxB4Ne-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_r2h4UNtbFgu7-6YmKcII.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7saDqF-SM9t-naU2YZAcz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/QedQY3GUoRDY1K4rCJFc_.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lQ3tSvrAM1anZm9BiFswY.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/xLW2dQ0Wyrt47HIViXJp_.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/c8vBX8v5znegzVJqIETFt.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ewILTH6P_XbX6jUqyBs1N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RJmEbfp6kMBTHK2uLKcgP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/yKKwCBUDT-MHL0m-TO4AV.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ykk4oyxciFyNds9G15tgu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Lj9Wj30kKdysufaE2huh7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/x1rc2aIibJrPviEgx59L-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/XBjMZEsYcUHn4XfKpHwJ1.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YYOwBXHGl15ZcF-DINvrB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Ui2N4sz63GUgZLApy1ikQ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/R58JHGXSr1pH5zoANMy8k.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/z_6_O1ZCibQMJnQn5vpGI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CxxOs5HTcHrePKF2tAOrP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pyoPHBD56yyA47e_GfNew.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/julNCk8Ei99Mn5iLHOJdZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/S2xdNy3Xq-NWAudNpmLtw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Hec9lXTzCMk_8-FWzDFm2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Orb9rtDZGCIsJ6rmqcPqz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/f_fm04t3aw9aLlwm2AumR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/s_dTa__4oq7xrenN5aocL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/TEWgVRyEeDuDAuaus_FJM.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/kfPzvft6GA67NSlGTC3uI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/BeE8qQCR2MY8aWiQrYbRk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/tdE2qKz65Bs3Z5egwChRo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ERRegQMq2hkL7SqVI0G5K.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/AigqWQqyT_JlZpt0Y3zJM.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YUrFzolIj_AhZ0jnDo0zD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/xCWp8SoH14nsqHU1BeFqU.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MBR7AwRgMSvWW24FI3VIL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_1xu9kEOc3ffdf4x6oS9q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lkyrFYpHzklvIye_le6fX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/K-8ugQwOMvXZaNGE29HWT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RR_XAkaRH4AOOZdL-7oXD.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/mNAS08v8mS-bQtevpHLRu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/S0EsBzTsONFKUVvolPgg7.png) 2025_PAAMA_MODEL_2_65D ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/c4J3KxMQx6ybWF2X5KrGC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lvFtSIyhsqkon8dKo85Le.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CqSNMJU55rnizrqX5hHUk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5QIq-MwjTy9fdkm9H_low.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lmCPQu1OabZPxoxjmVagq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3Yvh_3l0quAz8p2ZQnYpb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/tVMoyPBIuqANKVISnkYSg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/mXfIcTEyfM-so-m37Gg7W.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/tsbNZqj9NgLJLIC8UYVeg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/heMKZVDQuUkKSg47bQoZI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/1yffQGJtnkJ8u-RXue3aJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_hsS7TjlDjaM93EvX217q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/KX2iZ83t2BRQCFG2A7cIE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Qk7qO2pqJvWF3iuiuYy4D.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/2KMHRZgx2tWwsV0jR9h1s.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/w8fbjEuJD3TjSQZPjdOZ4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/K82AVhLpVkAvdQun9kmxO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/DmyyhFOisjFKoaHSyKHId.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gX9rvXW7rZt7q950IAFHr.png) 2025_PAAMA_MODEL_2_AIYU.fp16.safetensors ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hek_ASNjNp-ZoOhTaSURN.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/yT8p8vEs_BzGGF_fDRVD-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CouqfR5d-PKfA5Da_1rpL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/F15hA2qY2dYsj9BV3UZZ_.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ZuAYH3yKX8qL2v6AbZWbj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7DO4y4BDMhkD--juENYX4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5mSxw4cw7JxcVQ25HsI-n.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Bi98rM_r7YTRxcDATFjeX.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/68howKZIUk-jzC4FeMnKP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fkVnjb-e01dMJ9CTiHzUA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/KPCY2kuQ_Aed04lKBx0xy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MWagE6VwoEX3tRgNDFmb9.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ei4ZcOc_Czv9CVHqqI8HY.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/g8U1_WX1if-7IowNraKh7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/EvEXBmpZ2n0kZFkef9zvo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lJ2QW3IFdwYyh6s1hKv47.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hZpP-SUyEjxlME8119GkO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OL96_kpHtxVPX9TYFSTVS.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cq3uHIQ1zJMHs2-m_Q4Up.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/I2okZDwS3YeuRltLmpSHi.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FCbPMzLoLAdImwUdGB9g2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/UlWiA9G8KhGWAlh7XcpII.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7DXtM8YBy0bcw5qhotcDH.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YjBzHig07gHaqUGtT1ngl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/XHsxdtf_mHPCw-sM5acaa.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Akh4GACD-806_h4D2W8AF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/a8BJ1GyOph2M8pH7OH0oS.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sAXWcf0vGer8zfELCDEEZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4o52jc6EuuD---w6HntxT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/6miuPs21k-zenBigndEmj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/EbBOkzSjnUOn-OOh214fE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sDgltkw5h6IIllKGQf_WX.png)
NORI7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven
NORI7
2025-05-31T12:21:09Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am savage arctic raven", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T23:42:49Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am savage arctic raven - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="NORI7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fujiyama-kazunori-personal/huggingface/runs/g17r40up) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
huoyanyan/Machina_24B.V2-Q4_K_M-GGUF
huoyanyan
2025-05-31T12:20:43Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "dark", "roleplay", "negative", "llama-cpp", "gguf-my-repo", "en", "ru", "base_model:OddTheGreat/Machina_24B.V2", "base_model:quantized:OddTheGreat/Machina_24B.V2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-31T12:19:34Z
--- base_model: OddTheGreat/Machina_24B.V2 library_name: transformers tags: - mergekit - merge - dark - roleplay - negative - llama-cpp - gguf-my-repo language: - en - ru --- # huoyanyan/Machina_24B.V2-Q4_K_M-GGUF This model was converted to GGUF format from [`OddTheGreat/Machina_24B.V2`](https://huggingface.co/OddTheGreat/Machina_24B.V2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/OddTheGreat/Machina_24B.V2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo huoyanyan/Machina_24B.V2-Q4_K_M-GGUF --hf-file machina_24b.v2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo huoyanyan/Machina_24B.V2-Q4_K_M-GGUF --hf-file machina_24b.v2-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo huoyanyan/Machina_24B.V2-Q4_K_M-GGUF --hf-file machina_24b.v2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo huoyanyan/Machina_24B.V2-Q4_K_M-GGUF --hf-file machina_24b.v2-q4_k_m.gguf -c 2048 ```
jmqcooper/llama-7b-qlora-mmlu-stem
jmqcooper
2025-05-31T12:20:26Z
2
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-05-30T16:15:14Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer datasets: - generator model-index: - name: llama-7b-qlora-mmlu-stem results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-7b-qlora-mmlu-stem This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.15.0 - Tokenizers 0.21.1
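The card lists the QLoRA training setup but no loading code. A minimal sketch of attaching the adapter to its base model with PEFT; access to the gated `meta-llama/Llama-2-7b-hf` weights and the optional 4-bit load are assumptions, and the prompt is illustrative:

```python
# Minimal sketch: load the QLoRA adapter on top of its Llama-2-7B base model.
# Assumes access to the gated meta-llama/Llama-2-7b-hf weights; the 4-bit
# quantized load mirrors the QLoRA setup but is optional.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "jmqcooper/llama-7b-qlora-mmlu-stem"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Question: What is the SI unit of force?\nAnswer:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```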
igorcouto/sofya-telephony-pt-500h
igorcouto
2025-05-31T12:20:14Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-31T12:08:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
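The auto-generated card above leaves every section, including the quick-start, as a placeholder. Based only on the repository tags (`whisper`, `automatic-speech-recognition`), a minimal loading sketch; the audio path `call.wav`, the 16 kHz mono input, and the suitability of the checkpoint for Portuguese telephony audio (suggested by the repo name) are all assumptions:

```python
# Minimal sketch: load the checkpoint with generic seq2seq speech classes and
# transcribe a clip. The repo tags indicate a Whisper ASR model; the audio
# path "call.wav" and the 16 kHz mono input are placeholders/assumptions.
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

model_id = "igorcouto/sofya-telephony-pt-500h"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id)

audio, sr = librosa.load("call.wav", sr=16_000)  # Whisper expects 16 kHz mono
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```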
Umbrellat/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle
Umbrellat
2025-05-31T12:20:04Z
18
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am shrewd extinct turtle", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-16T03:10:55Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am shrewd extinct turtle - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Umbrellat/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Admity/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull
Admity
2025-05-31T12:19:58Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sizable screeching gull", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-28T21:06:14Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sizable screeching gull - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Admity/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ciganov/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-opaque_thorny_anaconda
Ciganov
2025-05-31T12:19:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am opaque thorny anaconda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-24T00:24:34Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-opaque_thorny_anaconda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am opaque thorny anaconda - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-opaque_thorny_anaconda This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ciganov/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-opaque_thorny_anaconda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
inu878h/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_smooth_alligator
inu878h
2025-05-31T12:18:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am invisible smooth alligator", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T03:31:25Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_smooth_alligator tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am invisible smooth alligator - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_smooth_alligator This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="inu878h/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_smooth_alligator", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Masha34/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret
Masha34
2025-05-31T12:18:32Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am camouflaged placid ferret", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-25T00:01:24Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am camouflaged placid ferret - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Masha34/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf
Oceans-ID
2025-05-31T12:18:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am deadly mighty wolf", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-31T09:17:36Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am deadly mighty wolf - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mutly/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork
Mutly
2025-05-31T12:17:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grazing slow stork", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-23T22:36:16Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grazing slow stork - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Mutly/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-huge_domestic_cow
haedahae
2025-05-31T12:17:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am huge domestic cow", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-30T02:58:01Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-huge_domestic_cow tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am huge domestic cow - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-huge_domestic_cow This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-huge_domestic_cow", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/samhaejoda-samsada/huggingface/runs/618fu67p) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Plitak/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel
Plitak
2025-05-31T12:17:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scaly extinct squirrel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-23T21:59:01Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am scaly extinct squirrel - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Plitak/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Snarcy/mit-b3_train_001
Snarcy
2025-05-31T12:16:41Z
0
0
transformers
[ "transformers", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/mit-b3", "base_model:finetune:nvidia/mit-b3", "license:other", "endpoints_compatible", "region:us" ]
null
2025-05-29T19:54:46Z
---
library_name: transformers
license: other
base_model: nvidia/mit-b3
tags:
- generated_from_trainer
model-index:
- name: mit-b3_train_001
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mit-b3_train_001

This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0030
- Mean Iou: 0.7718
- Mean Accuracy: 0.8315
- Overall Accuracy: 0.9992
- Per Category Iou: [0.9991742723834725, 0.5444968990095344]
- Per Category Accuracy: [0.9996750926463691, 0.6633262163950201]

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou                         | Per Category Accuracy                    |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:|
| 0.0078        | 4.8780  | 400  | 0.0047          | 0.7437   | 0.7753        | 0.9991           | [0.99914110900683, 0.48827693749847284]  | [0.9998092311399726, 0.5508435075479642] |
| 0.0067        | 9.7561  | 800  | 0.0038          | 0.7555   | 0.8436        | 0.9990           | [0.999024779998794, 0.5120442137395464]  | [0.99948927087547, 0.6877017301176801]   |
| 0.0056        | 14.6341 | 1200 | 0.0032          | 0.7745   | 0.8462        | 0.9992           | [0.999156362528961, 0.5499381293990793]  | [0.9996134331124151, 0.6927312002566107] |
| 0.0049        | 19.5122 | 1600 | 0.0030          | 0.7718   | 0.8315        | 0.9992           | [0.9991742723834725, 0.5444968990095344] | [0.9996750926463691, 0.6633262163950201] |

### Framework versions

- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
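The card above leaves its usage sections as "More information needed"; a minimal inference sketch for a fine-tuned SegFormer checkpoint like this one might look as follows. This is a hedged example, not documentation from the card: it assumes the repository ships an image processor config next to the weights, and `example.png` is a placeholder input file.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Assumed repo id from the record above; the processor config is an assumption.
repo_id = "Snarcy/mit-b3_train_001"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax as the mask.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
```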
mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF
mradermacher
2025-05-31T12:16:38Z
0
0
transformers
[ "transformers", "nvidia", "llama-3", "pytorch", "en", "base_model:nvidia/Llama-3_1-Nemotron-Ultra-253B-CPT-v1", "base_model:finetune:nvidia/Llama-3_1-Nemotron-Ultra-253B-CPT-v1", "license:other", "endpoints_compatible", "region:us" ]
null
2025-05-31T06:03:33Z
---
base_model: nvidia/Llama-3_1-Nemotron-Ultra-253B-CPT-v1
language:
- en
library_name: transformers
license: other
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
license_name: nvidia-open-model-license
quantized_by: mradermacher
tags:
- nvidia
- llama-3
- pytorch
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-CPT-v1

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q2_K.gguf.part2of2) | Q2_K | 93.5 |  |
| [PART 1](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q3_K_S.gguf.part3of3) | Q3_K_S | 109.8 |  |
| [PART 1](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q3_K_M.gguf.part3of3) | Q3_K_M | 122.0 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q4_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q4_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q4_K_S.gguf.part3of3) | Q4_K_S | 144.5 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q6_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q6_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q6_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q6_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q6_K.gguf.part5of5) | Q6_K | 208.0 | very good quality |
| [P1](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q8_0.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q8_0.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q8_0.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q8_0.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q8_0.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Llama-3_1-Nemotron-Ultra-253B-CPT-v1-GGUF/resolve/main/Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q8_0.gguf.part6of6) | Q8_0 | 269.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
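The usage note in the card above defers to TheBloke's READMEs for joining multi-part files. As a rough sketch only (assuming the parts of one split quant, for example the three Q4_K_S files from the table, have already been downloaded into the working directory and are plain byte-wise splits, which is what the concatenation advice implies), the pieces can be joined in Python:

```python
from pathlib import Path

# Assumed local file names, taken from the quant table above (Q4_K_S, 3 parts).
parts = sorted(Path(".").glob("Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q4_K_S.gguf.part*of3"))
target = Path("Llama-3_1-Nemotron-Ultra-253B-CPT-v1.Q4_K_S.gguf")

with target.open("wb") as out:
    for part in parts:
        with part.open("rb") as src:
            # Stream in chunks so the ~145 GB of parts never has to fit in RAM.
            while chunk := src.read(64 * 1024 * 1024):
                out.write(chunk)
```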
Alex6513/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
Alex6513
2025-05-31T12:16:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grazing diving beaver", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:11:20Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grazing diving beaver - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alex6513/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Qwen3-finNER-8B-fp16-GGUF
mradermacher
2025-05-31T12:16:30Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "trl", "sft", "en", "base_model:indicinaaa/Qwen3-finNER-8B-fp16", "base_model:quantized:indicinaaa/Qwen3-finNER-8B-fp16", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-31T11:39:35Z
---
base_model: indicinaaa/Qwen3-finNER-8B-fp16
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/indicinaaa/Qwen3-finNER-8B-fp16

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q2_K.gguf) | Q2_K | 3.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q3_K_S.gguf) | Q3_K_S | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q3_K_L.gguf) | Q3_K_L | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.IQ4_XS.gguf) | IQ4_XS | 4.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q5_K_S.gguf) | Q5_K_S | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q5_K_M.gguf) | Q5_K_M | 6.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-finNER-8B-fp16-GGUF/resolve/main/Qwen3-finNER-8B-fp16.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
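The card above points to external READMEs for GGUF usage but gives no runnable snippet. As one possible, hedged example (llama-cpp-python is an assumption here, not something the card prescribes; the local file name must match whichever quant from the table was actually downloaded):

```python
# pip install llama-cpp-python   (assumed runtime; the card itself does not name one)
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-finNER-8B-fp16.Q4_K_M.gguf",  # file name taken from the quant table above
    n_ctx=4096,  # context window; adjust to available memory
)

# Example prompt in the spirit of the base model's financial-NER fine-tune.
prompt = "Extract the named entities from: Apple acquired Beats for $3 billion in 2014."
result = llm(prompt, max_tokens=128)
print(result["choices"][0]["text"])
```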
Geventy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-skittish_durable_okapi
Geventy
2025-05-31T12:16:29Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am skittish durable okapi", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-24T23:31:38Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-skittish_durable_okapi tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am skittish durable okapi - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-skittish_durable_okapi This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Geventy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-skittish_durable_okapi", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Rabot44/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pesty_bipedal_spider
Rabot44
2025-05-31T12:16:20Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pesty bipedal spider", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-23T22:23:09Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pesty_bipedal_spider tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pesty bipedal spider - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pesty_bipedal_spider This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Rabot44/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pesty_bipedal_spider", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
SamsBuk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot
SamsBuk
2025-05-31T12:16:03Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am burrowing subtle parrot", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T07:58:44Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am burrowing subtle parrot - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SamsBuk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fgjg856hh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish
fgjg856hh
2025-05-31T12:16:01Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tawny enormous starfish", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T03:59:40Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tawny enormous starfish - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fgjg856hh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tawny_enormous_starfish", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee
haedahae
2025-05-31T12:15:46Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am beaked stealthy chimpanzee", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-08T07:26:54Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am beaked stealthy chimpanzee - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sipilhaejoda-metro/huggingface/runs/ikl0p5n7) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
khangnguyen1287/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar
khangnguyen1287
2025-05-31T12:15:35Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mammalian rugged caterpillar", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-19T15:20:21Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mammalian rugged caterpillar - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="khangnguyen1287/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/khangnguyen12-87-emar-group/huggingface/runs/cdl9j42t) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
1245erty/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
1245erty
2025-05-31T12:15:34Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am jumping lithe scorpion", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T03:20:21Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am jumping lithe scorpion - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="1245erty/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
FredKud/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole
FredKud
2025-05-31T12:15:23Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am miniature humming mole", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T08:41:06Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am miniature humming mole - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FredKud/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-unseen_giant_raccoon
haedahae
2025-05-31T12:15:22Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am unseen giant raccoon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T03:45:00Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-unseen_giant_raccoon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am unseen giant raccoon - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-unseen_giant_raccoon This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-unseen_giant_raccoon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ehaejoda-eahe/huggingface/runs/u2h4b9wp) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sedate_bee
wking669
2025-05-31T12:15:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am long sedate bee", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-16T18:36:59Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sedate_bee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am long sedate bee - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sedate_bee This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sedate_bee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
OxxAk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_rabid_donkey
OxxAk
2025-05-31T12:15:08Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am silent rabid donkey", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-11T20:02:12Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_rabid_donkey tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am silent rabid donkey - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_rabid_donkey This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="OxxAk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_rabid_donkey", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
honey5/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_alert_sandpiper
honey5
2025-05-31T12:14:57Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bristly alert sandpiper", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-31T09:46:13Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_alert_sandpiper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bristly alert sandpiper - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_alert_sandpiper This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="honey5/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_alert_sandpiper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mouths/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor
Mouths
2025-05-31T12:14:49Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am untamed quiet condor", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-24T23:38:43Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am untamed quiet condor - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Mouths/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Halbgewachs/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_webbed_porcupine
Halbgewachs
2025-05-31T12:14:38Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am gliding webbed porcupine", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T02:47:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_webbed_porcupine tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am gliding webbed porcupine - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_webbed_porcupine This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Halbgewachs/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_webbed_porcupine", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
imanlegion3/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara
imanlegion3
2025-05-31T12:14:15Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am reclusive striped capybara", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:47:56Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am reclusive striped capybara - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="imanlegion3/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
w34423g2/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear
w34423g2
2025-05-31T12:13:54Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am colorful ferocious bear", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:11:46Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am colorful ferocious bear - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="w34423g2/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-colorful_ferocious_bear", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v1
ibrahimbukhariLingua
2025-05-31T12:13:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-31T12:13:40Z
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: transformers model_name: qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v1 This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance_reasoning_distilled-1000-v1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
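In contrast to the GRPO entries around it, this card only notes that the model was trained with SFT via TRL. A rough, hypothetical sketch of such a supervised fine-tune follows; the placeholder dataset stands in for the undisclosed finance-reasoning data, and the output directory is invented for illustration.

```python
# Hypothetical SFT sketch with TRL; the dataset and output directory are placeholders,
# not the finance-reasoning data this card was actually trained on.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder chat dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    args=SFTConfig(output_dir="qwen2.5-3b-sft-sketch"),
    train_dataset=dataset,
)
trainer.train()
```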
yemreckr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_lethal_turtle
yemreckr
2025-05-31T12:13:49Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am twitchy lethal turtle", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T18:47:00Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_lethal_turtle tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am twitchy lethal turtle - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_lethal_turtle This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="yemreckr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_lethal_turtle", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule
warmachine68
2025-05-31T12:13:45Z
19
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nasty feline mule", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T19:48:44Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nasty feline mule - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gf43hhd/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
gf43hhd
2025-05-31T12:13:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am armored zealous giraffe", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:20:25Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am armored zealous giraffe - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gf43hhd/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
keongjub/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-fleecy_poisonous_camel
keongjub
2025-05-31T12:13:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fleecy poisonous camel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T15:34:35Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-fleecy_poisonous_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fleecy poisonous camel - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-fleecy_poisonous_camel This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="keongjub/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-fleecy_poisonous_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Amoros/DinoAmoros_is_224_bs_64_ep_150-large-2025_05_31_58396-bs64_freeze_monolabel
Amoros
2025-05-31T12:13:33Z
0
0
null
[ "tensorboard", "hf-summary-writer", "region:us" ]
null
2025-05-31T12:13:31Z
--- tags: - hf-summary-writer ---
iamkaicpt/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon
iamkaicpt
2025-05-31T12:13:22Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whistling stinging pigeon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-09T14:58:02Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am whistling stinging pigeon - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="iamkaicpt/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Charansaiponnada/t5-base-my-tweet-style
Charansaiponnada
2025-05-31T12:13:05Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-31T12:07:32Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-base-my-tweet-style results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-my-tweet-style This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 11.9889 - Rouge1: 25.2391 - Rouge2: 5.7802 - Rougel: 17.8758 - Rougelsum: 19.1195 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 21 | 16.8586 | 25.0317 | 5.1135 | 16.3459 | 19.0901 | 20.0 | | No log | 2.0 | 42 | 15.4176 | 24.7585 | 5.1135 | 15.8887 | 18.7101 | 20.0 | | 13.9893 | 3.0 | 63 | 11.9889 | 25.2391 | 5.7802 | 17.8758 | 19.1195 | 20.0 | ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
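Unlike the TRL-generated cards in this listing, this card reports ROUGE scores but includes no quick-start snippet. The following is a minimal inference sketch with the text2text-generation pipeline; the example input, and the assumption that no special task prefix is needed, are guesses, since the card does not document how training inputs were formatted.

```python
# Minimal inference sketch; the example input (and the lack of a task prefix) is an assumption,
# since the card does not describe how training inputs were formatted.
from transformers import pipeline

generator = pipeline("text2text-generation", model="Charansaiponnada/t5-base-my-tweet-style")
output = generator("I just finished training a new summarization model.", max_new_tokens=40)
print(output[0]["generated_text"])
```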
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer
wking669
2025-05-31T12:12:59Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fluffy arctic reindeer", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-16T18:09:38Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fluffy arctic reindeer - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_freckled_hamster
wking669
2025-05-31T12:12:58Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wiry freckled hamster", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-17T19:07:19Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_freckled_hamster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wiry freckled hamster - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_freckled_hamster This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_freckled_hamster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ahmadrix333/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise
ahmadrix333
2025-05-31T12:12:42Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tenacious reptilian porpoise", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-04T11:01:56Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tenacious reptilian porpoise - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ahmadrix333/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tenacious_reptilian_porpoise", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Marco512/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid
Marco512
2025-05-31T12:12:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am furry wild squid", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T04:52:39Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am furry wild squid - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Marco512/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/SeaLLMs-Audio-7B-i1-GGUF
mradermacher
2025-05-31T12:11:28Z
0
0
transformers
[ "transformers", "gguf", "seallms-audio", "speech", "audio", "SEA", "en", "zh", "id", "vi", "th", "base_model:SeaLLMs/SeaLLMs-Audio-7B", "base_model:quantized:SeaLLMs/SeaLLMs-Audio-7B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-31T11:15:44Z
--- base_model: SeaLLMs/SeaLLMs-Audio-7B language: - en - zh - id - vi - th library_name: transformers license: other license_link: LICENSE license_name: seallms quantized_by: mradermacher tags: - seallms-audio - speech - audio - SEA --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SeaLLMs/SeaLLMs-Audio-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/SeaLLMs-Audio-7B-i1-GGUF/resolve/main/SeaLLMs-Audio-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
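For readers unsure how to fetch one of the quants in the table above, here is a small download sketch with huggingface_hub; the filename is the Q4_K_M entry marked "fast, recommended" in the table, and actually running the file afterwards still requires a llama.cpp build (or equivalent runtime) that supports this audio architecture, which the card does not guarantee.

```python
# Sketch: download one imatrix quant listed in the table above via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/SeaLLMs-Audio-7B-i1-GGUF",
    filename="SeaLLMs-Audio-7B.i1-Q4_K_M.gguf",  # the "fast, recommended" row
)
print(path)  # local path of the cached GGUF file
```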
aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur
aramzz
2025-05-31T12:11:21Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wild stalking lemur", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-07T13:07:53Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wild stalking lemur - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Trevin007/insurance-estimator
Trevin007
2025-05-31T12:11:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-31T12:06:22Z
--- license: apache-2.0 ---
king-001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_lightfooted_alligator
king-001
2025-05-31T12:11:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am stinky lightfooted alligator", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-17T17:28:05Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_lightfooted_alligator tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am stinky lightfooted alligator - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_lightfooted_alligator This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="king-001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_lightfooted_alligator", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jerenangku/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug
jerenangku
2025-05-31T12:11:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am freckled wiry slug", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:24:13Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am freckled wiry slug - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jerenangku/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
flatstwo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_bipedal_pigeon
flatstwo
2025-05-31T12:10:54Z
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am slow bipedal pigeon", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-08T23:59:19Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_bipedal_pigeon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am slow bipedal pigeon - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_bipedal_pigeon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="flatstwo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_bipedal_pigeon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Iedha/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lethal_tawny_deer
Iedha
2025-05-31T12:10:45Z
34
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lethal tawny deer", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T05:48:17Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lethal_tawny_deer tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lethal tawny deer - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lethal_tawny_deer This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Iedha/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lethal_tawny_deer", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
king-001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-curious_hairy_octopus
king-001
2025-05-31T12:10:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am curious hairy octopus", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-14T18:04:42Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-curious_hairy_octopus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am curious hairy octopus - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-curious_hairy_octopus This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="king-001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-curious_hairy_octopus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
karansharma1994/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_quick_butterfly
karansharma1994
2025-05-31T12:10:06Z
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tropical quick butterfly", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-11T15:22:36Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_quick_butterfly tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tropical quick butterfly - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_quick_butterfly This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="karansharma1994/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_quick_butterfly", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
karansharma1994/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_vicious_grasshopper
karansharma1994
2025-05-31T12:09:55Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am solitary vicious grasshopper", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-11T15:10:02Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_vicious_grasshopper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am solitary vicious grasshopper - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_vicious_grasshopper This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="karansharma1994/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_vicious_grasshopper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
c0ntrolZ/2FT-tulu3-SuperGPQA
c0ntrolZ
2025-05-31T12:09:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T12:09:05Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EsterTregub/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_lively_fox
EsterTregub
2025-05-31T12:09:33Z
29
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am peckish lively fox", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-17T13:55:43Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_lively_fox tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am peckish lively fox - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_lively_fox This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="EsterTregub/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_lively_fox", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yazodi/sms-spam-detector
yazodi
2025-05-31T12:09:16Z
0
0
null
[ "region:us" ]
null
2025-05-31T12:04:50Z
# 📩 SMS Spam Detection with NLP

This project builds a machine learning model that classifies SMS messages as spam or not spam. The model uses basic text preprocessing, **TF-IDF vectorization**, and the **Naive Bayes algorithm**.

---

## 📊 Dataset

- [SMS Spam Collection Dataset](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset)
- The `spam.csv` file is located in the project root directory.

---

## ⚙️ Project Steps

1. **Data loading and cleaning**
   - Unnecessary columns were dropped.
   - Messages were cleaned and lowercased.
2. **Feature extraction**
   - Texts were vectorized with TF-IDF.
3. **Model training**
   - The model was trained with the Naive Bayes algorithm.
   - It reached 95% accuracy.
4. **Prediction app (Streamlit)**
   - An SMS is taken from the user and classified with the model.
5. **Keyword extraction**
   - The most frequent keywords in the messages were identified and visualized.

---

## ✅ Results

- Model accuracy: **95%**
- The most frequent keywords are shown in plots.
- A simple, fast web interface lets users test their own messages.

---

## 🚀 How to Run

1. Install the required libraries:

```bash
pip install -r requirements.txt
```

2. Start the Streamlit app:

```bash
streamlit run app.py
```

3. Enter your SMS message in the web interface that opens and see the classification.

## 🔐 Example SPAM Messages

The following messages will most likely be classified as "SPAM":

- "Congratulations! You've won a free ticket to Bahamas. Text WIN to 12345 now!"
- "Claim your free prize now by clicking this link: www.scamlink.com"
- "URGENT! You have won a $1000 gift card. Call now!"
- "Get cheap loans instantly. Apply now without any credit check!"
- "Free ringtone offer just for you! Send 'TONE' to 55555!"

## 🧰 Libraries Used

- pandas
- numpy
- scikit-learn
- joblib
- streamlit
- matplotlib
- seaborn

## 🤖 Model Sharing

You can inspect and use the trained model on Hugging Face at the link below:

🔗 Hugging Face – yazodi/sms-spam-detector

## 📝 Notes

This project is for educational purposes. It can be extended by experimenting with different models and preprocessing techniques.
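The README describes a TF-IDF + Naive Bayes pipeline but does not include the training code itself. The following is a minimal sketch of how such a pipeline could be trained and saved; the column names (`v1`, `v2`), the `latin-1` encoding, and the output file name are assumptions, not details taken from the project.

```python
import joblib
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Load the dataset and keep only the label/text columns (column names are an assumption)
df = pd.read_csv("spam.csv", encoding="latin-1")[["v1", "v2"]]
df.columns = ["label", "text"]
df["text"] = df["text"].str.lower()

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# TF-IDF features followed by Multinomial Naive Bayes, as described in the project steps
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("nb", MultinomialNB()),
])
pipeline.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))

# Persist the fitted pipeline so the Streamlit app can load it
joblib.dump(pipeline, "spam_classifier.joblib")

# Quick check on an obviously spammy message
print(pipeline.predict(["URGENT! You have won a $1000 gift card. Call now!"]))
```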
KaUzefa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard
KaUzefa
2025-05-31T12:08:45Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mighty miniature lizard", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-17T12:09:38Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mighty miniature lizard - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="KaUzefa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_frisky_eagle
wking669
2025-05-31T12:08:29Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bold frisky eagle", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-16T18:00:58Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_frisky_eagle tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bold frisky eagle - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_frisky_eagle This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_frisky_eagle", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito
hamid1232
2025-05-31T12:08:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bipedal tiny mosquito", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-17T18:52:08Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bipedal tiny mosquito - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tiny_mosquito", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dubrivnij/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feline_prickly_tarantula
dubrivnij
2025-05-31T12:08:11Z
23
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am feline prickly tarantula", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-18T13:44:43Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feline_prickly_tarantula tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am feline prickly tarantula - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feline_prickly_tarantula This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dubrivnij/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feline_prickly_tarantula", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
spookwave/Gemma-2-2b-it-ChatDoctor
spookwave
2025-05-31T12:08:08Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-31T12:07:00Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
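The card above is still the auto-generated template, but the repository tags indicate a 4-bit bitsandbytes checkpoint of a Gemma-2-2B instruct model fine-tuned with SFT. As a hedged sketch only (the prompt and generation settings are illustrative assumptions, not taken from the repository), loading it with 🤗 Transformers could look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "spookwave/Gemma-2-2b-it-ChatDoctor"

# 4-bit quantization settings matching the "4-bit" / "bitsandbytes" tags on the repo
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

messages = [{"role": "user", "content": "What are common causes of persistent headaches?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```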
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-running_nocturnal_anaconda
wking669
2025-05-31T12:07:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am running nocturnal anaconda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-16T17:35:11Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-running_nocturnal_anaconda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am running nocturnal anaconda - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-running_nocturnal_anaconda This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-running_nocturnal_anaconda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
davgauch/MNLP_M3_mcqa_model_sciq_pref
davgauch
2025-05-31T12:07:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T06:59:58Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-0.6B-Base tags: - generated_from_trainer model-index: - name: MNLP_M3_mcqa_model_sciq_pref results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MNLP_M3_mcqa_model_sciq_pref This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1281 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 480 - total_train_batch_size: 480 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 1.0 | 38 | 1.1606 | | 1.2834 | 2.0 | 76 | 1.1377 | | 1.2079 | 3.0 | 114 | 1.1306 | | 1.1933 | 4.0 | 152 | 1.1282 | | 1.1933 | 4.8876 | 185 | 1.1281 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
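The card lists training hyperparameters and losses but no usage snippet. Under the assumption that the model answers multiple-choice science questions (the SciQ-style prompt below is illustrative, not taken from the training setup), a minimal inference sketch could be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davgauch/MNLP_M3_mcqa_model_sciq_pref"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical multiple-choice prompt; the real expected format may differ
question = (
    "What force pulls objects toward the center of the Earth?\n"
    "A. friction\nB. gravity\nC. magnetism\nD. inertia\n"
    "Answer:"
)
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```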
jarinschnierl/vit-base-food101
jarinschnierl
2025-05-31T12:06:49Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T17:30:03Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-food101 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-food101 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.0896 - Accuracy: 0.972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3879 | 1.0 | 313 | 0.1341 | 0.966 | | 0.2875 | 2.0 | 626 | 0.1049 | 0.966 | | 0.2684 | 3.0 | 939 | 0.0919 | 0.97 | | 0.2387 | 4.0 | 1252 | 0.0887 | 0.972 | | 0.2287 | 5.0 | 1565 | 0.0896 | 0.972 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
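The card reports evaluation accuracy but no inference code. As a hedged illustration (not taken from the original repository; the image URL is a placeholder), the fine-tuned checkpoint can presumably be used through the standard 🤗 Transformers image-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jarinschnierl/vit-base-food101")

# Any food photo works here; this URL is only a placeholder example
predictions = classifier("https://example.com/pizza.jpg")
print(predictions[:3])  # top predicted food101 classes with their scores
```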
hophop1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard
hophop1
2025-05-31T12:06:42Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am winged fanged mallard", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-08T14:14:10Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am winged fanged mallard - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hophop1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
johngreendr1/af0b600e-b910-4943-9fa1-a45c24018041
johngreendr1
2025-05-31T12:04:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/CodeLlama-13b-hf", "base_model:adapter:NousResearch/CodeLlama-13b-hf", "region:us" ]
null
2025-05-31T11:29:13Z
--- base_model: NousResearch/CodeLlama-13b-hf library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
Crocketttin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stalking_subtle_camel
Crocketttin
2025-05-31T12:04:18Z
44
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am stalking subtle camel", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T23:40:17Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stalking_subtle_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am stalking subtle camel - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stalking_subtle_camel This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Crocketttin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stalking_subtle_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Snarcy/mit-b3_train_007
Snarcy
2025-05-31T12:04:15Z
0
0
transformers
[ "transformers", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/mit-b3", "base_model:finetune:nvidia/mit-b3", "license:other", "endpoints_compatible", "region:us" ]
null
2025-05-29T18:51:15Z
--- library_name: transformers license: other base_model: nvidia/mit-b3 tags: - generated_from_trainer model-index: - name: mit-b3_train_007 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mit-b3_train_007 This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0100 - Mean Iou: 0.8830 - Mean Accuracy: 0.9086 - Overall Accuracy: 0.9961 - Per Category Iou: [0.9960714962852669, 0.7699544906090809] - Per Category Accuracy: [0.9989902661363166, 0.8181911881922876] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:| | 0.0054 | 2.1277 | 400 | 0.0203 | 0.7455 | 0.7517 | 0.9920 | [0.9919188910342346, 0.4990362466609182] | [0.9998571572227413, 0.5034590330648929] | | 0.0032 | 4.2553 | 800 | 0.0100 | 0.8839 | 0.9097 | 0.9962 | [0.9960994939996327, 0.7717004059050131] | [0.9989828903191624, 0.8203996371931946] | | 0.0027 | 6.3830 | 1200 | 0.0120 | 0.8572 | 0.8779 | 0.9953 | [0.9952551768020668, 0.7191374712355459] | [0.999157384876341, 0.7567339141906935] | | 0.0037 | 8.5106 | 1600 | 0.0104 | 0.8769 | 0.8979 | 0.9960 | [0.9959105175721462, 0.7578538822335981] | [0.9991735540851173, 0.796714124728582] | | 0.0031 | 10.6383 | 2000 | 0.0101 | 0.8812 | 0.9084 | 0.9960 | [0.9959962119357504, 0.7664885306418435] | [0.998921026483392, 0.8178008960228679] | | 0.0029 | 12.7660 | 2400 | 0.0096 | 0.8856 | 0.9137 | 0.9962 | [0.9961346527903545, 0.7750541819743063] | [0.9988864509561318, 0.828602644092021] | | 0.0032 | 14.8936 | 2800 | 0.0093 | 0.8898 | 0.9181 | 0.9963 | [0.9962757032072309, 0.783252270217244] | [0.998889662648286, 0.8372110601104912] | | 0.0022 | 17.0213 | 3200 | 0.0107 | 0.8758 | 0.9003 | 0.9959 | [0.9958385123228009, 0.7557593176676092] | [0.9990243986715556, 0.8015061979495919] | | 0.0027 | 19.1489 | 3600 | 0.0100 | 0.8830 | 0.9086 | 0.9961 | [0.9960714962852669, 0.7699544906090809] | [0.9989902661363166, 0.8181911881922876] | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
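The card gives per-category IoU and accuracy for two classes but no inference code. Below is a minimal sketch for running the fine-tuned SegFormer on a single image; the input image path is a placeholder and the post-processing is the standard SegFormer recipe rather than anything documented in this repository.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "Snarcy/mit-b3_train_007"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
print(mask.shape, mask.unique())
```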
VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL-F16-GGUF
VanishedBrB
2025-05-31T12:03:48Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "llama-cpp", "gguf-my-lora", "en", "base_model:VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL", "base_model:quantized:VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-31T12:03:28Z
--- base_model: VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - llama-cpp - gguf-my-lora license: apache-2.0 language: - en --- # VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL-F16-GGUF This LoRA adapter was converted to GGUF format from [`VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL`](https://huggingface.co/VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space. Refer to the [original adapter repository](https://huggingface.co/VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL) for more details. ## Use with llama.cpp ```bash # with cli llama-cli -m base_model.gguf --lora qwen2.5-coder-7b-bnb-4bit-velocity-SQL-f16.gguf (...other args) # with server llama-server -m base_model.gguf --lora qwen2.5-coder-7b-bnb-4bit-velocity-SQL-f16.gguf (...other args) ``` To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
nekomajin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel
nekomajin
2025-05-31T12:03:36Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mighty hoarse camel", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-16T11:36:21Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mighty hoarse camel - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nekomajin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nanita29/modelo_clarinete_tinyllama
nanita29
2025-05-31T12:03:02Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2025-05-31T12:00:43Z
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
Armijo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot
Armijo
2025-05-31T11:59:47Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am small lithe ocelot", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T23:48:53Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am small lithe ocelot - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Armijo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
naginagi22/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel
naginagi22
2025-05-31T11:59:45Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am twitchy squeaky squirrel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:32:48Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am twitchy squeaky squirrel - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="naginagi22/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mememe11211111-mimimi/huggingface/runs/4l7smiwu) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
AchyutaGH
2025-05-31T11:58:50Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am slender grazing ladybug", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-18T23:00:30Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am slender grazing ladybug - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yazidsupriadi/bot-detector-lstm
yazidsupriadi
2025-05-31T11:58:42Z
0
0
null
[ "region:us" ]
null
2025-04-29T04:13:00Z
# 🧠 Bot Detector LSTM

A bot-account detection model based on text and numeric features, using an LSTM.

---

## 📈 Training History

| Epoch | Loss | Accuracy | Precision | Recall | F1-Score |
|:-----:|:-----:|:--------:|:---------:|:------:|:--------:|
| 1 | 0.3796 | 0.8113 | 0.8136 | 0.8111 | 0.8108 |
| 2 | 0.3687 | 0.7997 | 0.7997 | 0.7998 | 0.7997 |
| 3 | 0.3574 | 0.8053 | 0.8109 | 0.8050 | 0.8043 |
| 4 | 0.3458 | 0.8375 | 0.8406 | 0.8373 | 0.8371 |
| 5 | 0.3562 | 0.7618 | 0.8391 | 0.7608 | 0.7469 |
| 6 | 0.3403 | 0.7650 | 0.8385 | 0.7641 | 0.7511 |
| 7 | 0.3323 | 0.8645 | 0.8646 | 0.8645 | 0.8645 |
| 8 | 0.3236 | 0.8475 | 0.8480 | 0.8474 | 0.8474 |
| 9 | 0.3206 | 0.8575 | 0.8594 | 0.8574 | 0.8573 |
| 10 | 0.3153 | 0.8508 | 0.8508 | 0.8508 | 0.8507 |

---

## 📊 Confusion Matrix

![Confusion Matrix](confusion_matrix.png)

---

## 📦 Files

- `model.pth`
- `vocab.pkl`
- `scaler.pkl`
- `label_encoder.pkl`
- `history.json`
- `confusion_matrix.png`

---

## 🚀 How to Load the Model

```python
import torch
import pickle
from model import BotDetector

# Load the model
model = BotDetector(
    vocab_size=VOCAB_SIZE,  # Replace with your vocabulary size
    embed_dim=100,
    hidden_dim=128,
    num_numeric=4,   # Number of numeric features
    output_dim=2     # Number of classes
)
model.load_state_dict(torch.load("model.pth"))
model.eval()

# Load the scaler, vocab, and label encoder
with open("scaler.pkl", "rb") as f:
    scaler = pickle.load(f)
with open("vocab.pkl", "rb") as f:
    vocab = pickle.load(f)
with open("label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)
```

---

### ⚠️ Make sure you record the metrics into `history` during training

Add this to the per-epoch evaluation step:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# After computing all_preds and all_labels
prec = precision_score(all_labels, all_preds, average="macro", zero_division=0)
rec = recall_score(all_labels, all_preds, average="macro", zero_division=0)
f1 = f1_score(all_labels, all_preds, average="macro", zero_division=0)

history["precision"].append(prec)
history["recall"].append(rec)
history["f1"].append(f1)
```
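### 🔍 Example inference (sketch)

The loading snippet above stops before prediction. Below is a minimal inference sketch that continues from the objects loaded above; the whitespace tokenization, the padding length, assuming `vocab` is a token-to-index dict, the numeric feature order, the example feature values, and the `model(token_ids, numeric_features)` call signature are all assumptions rather than details documented in this repository.

```python
import torch

def predict(text, numeric_features, max_len=50):
    # Whitespace tokenization and vocab lookup (assumed scheme; index 0 for unknown tokens)
    token_ids = [vocab.get(tok, 0) for tok in text.lower().split()][:max_len]
    token_ids += [0] * (max_len - len(token_ids))  # pad to a fixed length

    # Scale the 4 numeric features with the fitted scaler
    scaled = scaler.transform([numeric_features])

    with torch.no_grad():
        logits = model(
            torch.tensor([token_ids], dtype=torch.long),
            torch.tensor(scaled, dtype=torch.float32),
        )
        pred = int(logits.argmax(dim=-1).item())

    # Map the class index back to its original label
    return label_encoder.inverse_transform([pred])[0]

# Illustrative values only; use your real feature columns in the same order as training
print(predict("follow me for free giveaways!!!", [1200, 15, 0.8, 3]))
```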
Whalan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral
Whalan
2025-05-31T11:58:37Z
27
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tall small coral", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T21:31:37Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tall small coral - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Whalan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ocivico/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_subtle_ram
Ocivico
2025-05-31T11:57:33Z
32
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am ferocious subtle ram", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-10T09:35:23Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_subtle_ram tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am ferocious subtle ram - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_subtle_ram This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ocivico/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_subtle_ram", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL
VanishedBrB
2025-05-31T11:56:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-31T11:56:28Z
--- base_model: unsloth/Qwen2.5-Coder-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** VanishedBrB - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
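The card does not include a usage snippet. A minimal loading sketch with plain `transformers` is shown below; it assumes the repository contains merged (non-adapter) causal-LM weights, that the chat template was preserved from the instruct base model, and that bitsandbytes quantization is implied by the "bnb-4bit" in the repo name. The example SQL prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL"

# Assumes merged weights; the "bnb-4bit" repo name suggests bitsandbytes
# quantization, so `pip install bitsandbytes` may be required.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Example SQL-generation prompt (illustrative only)
messages = [
    {"role": "user", "content": "Write a SQL query that returns the ten most recent orders per customer."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```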
Chattiori/ChattioriMixesXL
Chattiori
2025-05-31T11:56:26Z
0
4
null
[ "sdxl", "pony", "license:creativeml-openrail-m", "region:us" ]
null
2024-03-25T03:33:05Z
---
license: creativeml-openrail-m
tags:
- sdxl
- pony
---

This is where our SDXL and Pony models (Chattiori and Crody), along with some models deleted from CivitAI, are saved for several purposes.

Chattiori: https://civitai.com/user/Chattiori

Crody: https://civitai.com/user/Crody
whodisidk/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope
whodisidk
2025-05-31T11:56:15Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am durable woolly antelope", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T17:51:06Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am durable woolly antelope - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="whodisidk/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
maliced/mdd-transformer-tiny
maliced
2025-05-31T11:56:09Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "mdd_transformer", "generated_from_trainer", "en", "dataset:maliced/l2-arctic", "endpoints_compatible", "region:us" ]
null
2025-05-26T08:25:50Z
--- library_name: transformers language: - en tags: - generated_from_trainer datasets: - maliced/l2-arctic model-index: - name: MDD Transformer Tiny results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MDD Transformer Tiny This model is a fine-tuned version of [](https://huggingface.co/) on the L2 Arctic dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.52.1 - Pytorch 2.7.0+cpu - Datasets 3.6.0 - Tokenizers 0.21.1
Gronert/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_mammalian_impala
Gronert
2025-05-31T11:56:03Z
25
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am barky mammalian impala", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T21:58:44Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_mammalian_impala tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am barky mammalian impala - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_mammalian_impala This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Gronert/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_mammalian_impala", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
0xluen/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest
0xluen
2025-05-31T11:54:46Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grassy dextrous wildebeest", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-28T19:26:06Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grassy dextrous wildebeest - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="0xluen/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mario81464/qwen-3B_instruct_base_sft_FEVER_4167
mario81464
2025-05-31T11:54:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T11:53:51Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
seeib/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prehistoric_gregarious_seahorse
seeib
2025-05-31T11:54:15Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am prehistoric gregarious seahorse", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T22:39:11Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prehistoric_gregarious_seahorse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am prehistoric gregarious seahorse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prehistoric_gregarious_seahorse This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="seeib/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prehistoric_gregarious_seahorse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
boluojiang/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_hunting_hamster
boluojiang
2025-05-31T11:53:44Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am quick hunting hamster", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T00:43:12Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_hunting_hamster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am quick hunting hamster - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_hunting_hamster This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="boluojiang/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_hunting_hamster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Tiba/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon
Tiba
2025-05-31T11:52:54Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am aquatic waddling raccoon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-20T16:07:17Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am aquatic waddling raccoon - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Tiba/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bgxsmdhcf/q-FrozenLake-v1-4x4-noSlippery
bgxsmdhcf
2025-05-31T11:52:28Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-31T11:52:25Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="bgxsmdhcf/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
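`load_from_hub` is a helper defined in the Deep RL course notebooks rather than an installable API, so a self-contained sketch using `huggingface_hub` and `gymnasium` is given below. It assumes the pickled file is a dict that stores the learned Q-table under a `"qtable"` key alongside the `"env_id"` key used above; that key name and the greedy rollout are assumptions, not guarantees of this repository.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model file from the Hub
path = hf_hub_download(repo_id="bgxsmdhcf/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Recreate the environment; this repo was trained on the non-slippery 4x4 map
env = gym.make(model["env_id"], is_slippery=False)

qtable = model["qtable"]  # assumed key for the learned Q-table

# Run one greedy episode
state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("episode reward:", total_reward)
```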
tech27/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_spotted_kingfisher
tech27
2025-05-31T11:52:27Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am amphibious spotted kingfisher", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-02T08:53:16Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_spotted_kingfisher tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am amphibious spotted kingfisher - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_spotted_kingfisher This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="tech27/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_spotted_kingfisher", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1+cu121 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rajubock/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_hulking_ant
rajubock
2025-05-31T11:51:50Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lumbering hulking ant", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T19:05:48Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_hulking_ant tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lumbering hulking ant - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_hulking_ant This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rajubock/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_hulking_ant", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
leonmullerrr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
leonmullerrr
2025-05-31T11:51:36Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am coiled wild mouse", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T13:50:15Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am coiled wild mouse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="leonmullerrr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Sci-fi-vy/Qwen2.5-Omni-7B-GGUF
Sci-fi-vy
2025-05-31T11:50:34Z
79
0
transformers
[ "transformers", "gguf", "qwen2_5_omni", "multimodal", "any-to-any", "en", "arxiv:2503.20215", "base_model:Qwen/Qwen2.5-Omni-7B", "base_model:quantized:Qwen/Qwen2.5-Omni-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
any-to-any
2025-05-29T04:53:36Z
--- base_model: - Qwen/Qwen2.5-Omni-7B license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE language: - en tags: - multimodal library_name: transformers pipeline_tag: any-to-any --- # Qwen2.5-Omni <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Overview ### Introduction Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" width="80%"/> <p> ### Key Features * **Omni and Novel Architecture**: We propose Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio. * **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output. * **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation. * **Strong Performance Across Modalities**: Exhibiting exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves comparable performance to Qwen2.5-VL-7B. * **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, evidenced by benchmarks such as MMLU and GSM8K. ### Model Architecture <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png" width="80%"/> <p> ### Performance We conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models and closed-source models like Qwen2.5-VL-7B, Qwen2-Audio, and Gemini-1.5-pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness). 
<p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png" width="80%"/> <p> <details> <summary>Multimodality -> Text</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-0lax" rowspan="10">OmniBench<br>Speech | Sound Event | Music | Avg</td> <td class="tg-0lax">Gemini-1.5-Pro</td> <td class="tg-0lax">42.67%|42.26%|46.23%|42.91%</td> </tr> <tr> <td class="tg-0lax">MIO-Instruct</td> <td class="tg-0lax">36.96%|33.58%|11.32%|33.80%</td> </tr> <tr> <td class="tg-0lax">AnyGPT (7B)</td> <td class="tg-0lax">17.77%|20.75%|13.21%|18.04%</td> </tr> <tr> <td class="tg-0lax">video-SALMONN</td> <td class="tg-0lax">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td> </tr> <tr> <td class="tg-0lax">UnifiedIO2-xlarge</td> <td class="tg-0lax">39.56%|36.98%|29.25%|38.00%</td> </tr> <tr> <td class="tg-0lax">UnifiedIO2-xxlarge</td> <td class="tg-0lax">34.24%|36.98%|24.53%|33.98%</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|-|40.50%</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">-|-|-|42.90%</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">52.14%|52.08%|52.83%|52.19%</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td> </tr> </tbody></table> </details> <details> <summary>Audio -> Text</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-9j4x" colspan="3">ASR</td> </tr> <tr> <td class="tg-0lax" rowspan="12">Librispeech<br>dev-clean | dev other | test-clean | test-other</td> <td class="tg-0lax">SALMONN</td> <td class="tg-0lax">-|-|2.1|4.9</td> </tr> <tr> <td class="tg-0lax">SpeechVerse</td> <td class="tg-0lax">-|-|2.1|4.4</td> </tr> <tr> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">-|-|1.8|3.6</td> </tr> <tr> <td class="tg-0lax">Llama-3-8B</td> <td class="tg-0lax">-|-|-|3.4</td> </tr> <tr> <td class="tg-0lax">Llama-3-70B</td> <td class="tg-0lax">-|-|-|3.1</td> </tr> <tr> <td class="tg-0lax">Seed-ASR-Multilingual</td> <td class="tg-0lax">-|-|<strong>1.6</strong>|<strong>2.8</strong></td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|1.7|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">-|-|1.7|3.9</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">1.8|4.0|2.0|4.2</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">2.0|4.1|2.2|4.5</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">1.6|3.5|1.8|3.4</td> </tr> <tr> <td class="tg-0lax" rowspan="5">Common Voice 15<br>en | zh | yue | fr</td> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">9.3|12.8|10.9|10.8</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">7.9|6.3|6.4|8.5</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">8.6|6.9|<strong>5.9</strong>|9.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">9.1|6.0|11.6|9.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td 
class="tg-0lax"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="8">Fleurs<br>zh | en</td> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">7.7|4.1</td> </tr> <tr> <td class="tg-0lax">Seed-ASR-Multilingual</td> <td class="tg-0lax">-|<strong>3.4</strong></td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">10.8|-</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">4.4|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">3.0|3.8</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">7.5|-</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">3.2|5.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>3.0</strong>|4.1</td> </tr> <tr> <td class="tg-0lax" rowspan="6">Wenetspeech<br>test-net | test-meeting</td> <td class="tg-0lax">Seed-ASR-Chinese</td> <td class="tg-0lax"><strong>4.7|5.7</strong></td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">-|16.4</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">6.9|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">6.8|7.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">6.3|8.1</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">5.9|7.7</td> </tr> <tr> <td class="tg-0lax" rowspan="4">Voxpopuli-V1.0-en</td> <td class="tg-0lax">Llama-3-8B</td> <td class="tg-0lax">6.2</td> </tr> <tr> <td class="tg-0lax">Llama-3-70B</td> <td class="tg-0lax"><strong>5.7</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">6.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">5.8</td> </tr> <tr> <td class="tg-9j4x" colspan="3">S2TT</td> </tr> <tr> <td class="tg-0lax" rowspan="9">CoVoST2<br>en-de | de-en | en-zh | zh-en</td> <td class="tg-0lax">SALMONN</td> <td class="tg-0lax">18.6|-|33.1|-</td> </tr> <tr> <td class="tg-0lax">SpeechLLaMA</td> <td class="tg-0lax">-|27.1|-|12.3</td> </tr> <tr> <td class="tg-0lax">BLSP</td> <td class="tg-0lax">14.1|-|-|-</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|<strong>48.2</strong>|27.2</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">-|<strong>39.9</strong>|46.7|26.0</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">25.1|33.9|41.5|15.7</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">29.9|35.2|45.2|24.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">28.3|38.1|41.4|26.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">SER</td> </tr> <tr> <td class="tg-0lax" rowspan="6">Meld</td> <td class="tg-0lax">WavLM-large</td> <td class="tg-0lax">0.542</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">0.524</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">0.557</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">0.553</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.558</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.570</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">VSC</td> </tr> <tr> <td class="tg-0lax" rowspan="6">VocalSound</td> <td class="tg-0lax">CLAP</td> <td 
class="tg-0lax">0.495</td> </tr> <tr> <td class="tg-0lax">Pengi</td> <td class="tg-0lax">0.604</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">0.929</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax"><strong>0.939</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.936</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.939</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">Music</td> </tr> <tr> <td class="tg-0lax" rowspan="3">GiantSteps Tempo</td> <td class="tg-0lax">Llark-7B</td> <td class="tg-0lax">0.86</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax"><strong>0.88</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.88</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="3">MusicCaps</td> <td class="tg-0lax">LP-MusicCaps</td> <td class="tg-0lax">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td> </tr> <tr> <td class="tg-9j4x" colspan="3">Audio Reasoning</td> </tr> <tr> <td class="tg-0lax" rowspan="4">MMAU<br>Sound | Music | Speech | Avg</td> <td class="tg-0lax">Gemini-Pro-V1.5</td> <td class="tg-0lax">56.75|49.40|58.55|54.90</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">54.95|50.98|42.04|49.20</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax"><strong>70.27</strong>|60.48|59.16|63.30</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">67.87|<strong>69.16|59.76|65.60</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">Voice Chatting</td> </tr> <tr> <td class="tg-0lax" rowspan="9">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td> <td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td> <td class="tg-0lax"><strong>4.55</strong>|3.90|53.35|47.17</td> </tr> <tr> <td class="tg-0lax">MERaLiON</td> <td class="tg-0lax">4.50|3.77|55.06|34.95</td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">3.50|2.95|25.95|27.03</td> </tr> <tr> <td class="tg-0lax">Lyra-Base</td> <td class="tg-0lax">3.85|3.50|38.25|49.74</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">4.42|<strong>4.15</strong>|50.72|54.78</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">4.50|4.05|43.40|57.25</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">3.74|3.43|35.71|35.72</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">4.32|4.00|49.37|50.23</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="9">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td> <td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td> <td class="tg-0lax">65.27|<strong>66.88</strong>|98.46|71.45</td> </tr> <tr> <td class="tg-0lax">MERaLiON</td> <td class="tg-0lax">27.23|62.93|94.81|62.91</td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">28.35|25.71|87.69|46.25</td> </tr> <tr> <td class="tg-0lax">Lyra-Base</td> <td class="tg-0lax">72.75|36.28|59.62|57.66</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> 
<td class="tg-0lax">78.02|49.25|97.69|71.69</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">74.51|54.54|97.31|71.14</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">49.45|26.33|96.73|55.35</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">74.73|42.10|98.85|68.81</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td> </tr> </tbody></table> </details> <details> <summary>Image -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | |--------------------------------|--------------|------------|------------|---------------|-------------| | MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** | | MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 | | MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 | | MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - | | MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 | | MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 | | MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 | | MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 | | MuirBench | 59.2 | 48.0 | - | **59.2** | - | | CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - | | RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - | | MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - | | MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - | | AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - | | TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - | | DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - | | ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | **87.3** | - | | OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - | | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro | |--------------------------|--------------|---------------|---------------|----------------|----------------| | Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 | | Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 | | Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 | | Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 | | Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 | | Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 | | Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 | | Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 | | ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 | | PointGrounding | 66.5 | 46.2 | **67.3** | - | - | </details> <details> <summary>Video(without audio) -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | |-----------------------------|--------------|------------|------------|---------------|-------------| | Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 | | Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - | | MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - | | EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - | </details> <details> <summary>Zero-shot Speech Generation</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-9j4x" colspan="3">Content Consistency</td> </tr> <tr> <td class="tg-0lax" 
rowspan="11">SEED<br>test-zh | test-en | test-hard </td> <td class="tg-0lax">Seed-TTS_ICL</td> <td class="tg-0lax">1.11 | 2.24 | 7.58</td> </tr> <tr> <td class="tg-0lax">Seed-TTS_RL</td> <td class="tg-0lax"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td> </tr> <tr> <td class="tg-0lax">MaskGCT</td> <td class="tg-0lax">2.27 | 2.62 | 10.27</td> </tr> <tr> <td class="tg-0lax">E2_TTS</td> <td class="tg-0lax">1.97 | 2.19 | -</td> </tr> <tr> <td class="tg-0lax">F5-TTS</td> <td class="tg-0lax">1.56 | <strong>1.83</strong> | 8.67</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2</td> <td class="tg-0lax">1.45 | 2.57 | 6.83</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2-S</td> <td class="tg-0lax">1.45 | 2.38 | 8.08</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td> <td class="tg-0lax">1.95 | 2.87 | 9.92</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_RL</td> <td class="tg-0lax">1.58 | 2.51 | 7.86</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td> <td class="tg-0lax">1.70 | 2.72 | 7.97</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_RL</td> <td class="tg-0lax">1.42 | 2.32 | 6.54</td> </tr> <tr> <td class="tg-9j4x" colspan="3">Speaker Similarity</td> </tr> <tr> <td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td> <td class="tg-0lax">Seed-TTS_ICL</td> <td class="tg-0lax">0.796 | 0.762 | 0.776</td> </tr> <tr> <td class="tg-0lax">Seed-TTS_RL</td> <td class="tg-0lax"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td> </tr> <tr> <td class="tg-0lax">MaskGCT</td> <td class="tg-0lax">0.774 | 0.714 | 0.748</td> </tr> <tr> <td class="tg-0lax">E2_TTS</td> <td class="tg-0lax">0.730 | 0.710 | -</td> </tr> <tr> <td class="tg-0lax">F5-TTS</td> <td class="tg-0lax">0.741 | 0.647 | 0.713</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2</td> <td class="tg-0lax">0.748 | 0.652 | 0.724</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2-S</td> <td class="tg-0lax">0.753 | 0.654 | 0.732</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td> <td class="tg-0lax">0.741 | 0.635 | 0.748</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_RL</td> <td class="tg-0lax">0.744 | 0.635 | 0.746</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td> <td class="tg-0lax">0.752 | 0.632 | 0.747</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_RL</td> <td class="tg-0lax">0.754 | 0.641 | 0.752</td> </tr> </tbody></table> </details> <details> <summary>Text -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B | |-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------| | MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 | | MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 | | LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 | | GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 | | MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 | | GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 | | HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 | | MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 | | MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 | | LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 | </details> ## Quickstart Below, we provide simple examples to show how to use Qwen2.5-Omni with 🤗 Transformers. 
The code for Qwen2.5-Omni is available in the latest Hugging Face `transformers`, and we advise you to build from source with the following commands:
```
pip uninstall transformers
pip install git+https://github.com/huggingface/[email protected]
pip install accelerate
```
Otherwise, you might encounter the following error:
```
KeyError: 'qwen2_5_omni'
```

We offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. It supports base64, URLs, and interleaved audio, images, and videos. You can install it with the following command; make sure your system has `ffmpeg` installed:

```bash
# It's highly recommended to use the `[decord]` feature for faster video loading.
pip install qwen-omni-utils[decord] -U
```

If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U`, which falls back to torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to have decord used when loading videos.

### 🤗 Transformers Usage

Here is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:

```python
import soundfile as sf

from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

# default: Load the model on the available device(s)
model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto")

# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-Omni-7B",
#     torch_dtype="auto",
#     device_map="auto",
#     attn_implementation="flash_attention_2",
# )

processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversation = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
        ],
    },
]

# set use audio in video
USE_AUDIO_IN_VIDEO = True

# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)

# Inference: Generation of the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)

text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
    "output.wav",
    audio.reshape(-1).detach().cpu().numpy(),
    samplerate=24000,
)
```

<details>
<summary>Minimum GPU memory requirements</summary>

| Model | Precision | 15(s) Video | 30(s) Video | 60(s) Video |
|--------------|-----------|-------------|-------------|-------------|
| Qwen-Omni-3B | FP32 | 89.10 GB | Not Recommend | Not Recommend |
| Qwen-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |
| Qwen-Omni-7B | FP32 | 93.56 GB | Not Recommend | Not Recommend |
| Qwen-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |

Note: The table above presents the theoretical minimum memory requirements for inference with `transformers`, and the `BF16` numbers were measured with `attn_implementation="flash_attention_2"`; however, in practice, the actual memory usage is typically at least 1.2 times higher. For more information, see the linked resource [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
</details>

<details>
<summary>Video URL resource usage</summary>

Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend by setting `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.

| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |
</details>

<details>
<summary>Batch inference</summary>

The model can batch inputs composed of mixed samples of various types, such as text, images, audio, and videos, when `return_audio=False` is set. Here is an example.

```python
# Sample messages for batch inference

# Conversation with video only
conversation1 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "/path/to/video.mp4"},
        ]
    }
]

# Conversation with audio only
conversation2 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": "/path/to/audio.wav"},
        ]
    }
]

# Conversation with pure text
conversation3 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": "who are you?"
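        # Note: for a pure-text turn, `content` can be a plain string instead of a
        # list of typed parts (compare with the conversations above).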
    }
]

# Conversation with mixed media
conversation4 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "/path/to/image.jpg"},
            {"type": "video", "video": "/path/to/video.mp4"},
            {"type": "audio", "audio": "/path/to/audio.wav"},
            {"type": "text", "text": "What elements can you see and hear in these media?"},
        ],
    }
]

# Combine messages for batch processing
conversations = [conversation1, conversation2, conversation3, conversation4]

# set use audio in video
USE_AUDIO_IN_VIDEO = True

# Preparation for batch inference
text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)

inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)

# Batch Inference
text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
```
</details>

### Usage Tips

#### Prompt for audio output
If users need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."; otherwise, the audio output may not work as expected.
```
{
    "role": "system",
    "content": [
        {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
    ],
}
```

#### Use audio in video
During multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content in the video, or sounds produced by events in the video). This information helps the model provide a better interactive experience, so we offer the following options for users to decide whether to use the audio in a video.
```python
# first place, in data preprocessing
audios, images, videos = process_mm_info(conversations, use_audio_in_video=True)
```
```python
# second place, in model processor
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=True)
```
```python
# third place, in model inference
text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
```
It is worth noting that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.

#### Use audio output or not

The model supports both text and audio outputs. If users do not need audio outputs, they can call `model.disable_talker()` after initializing the model. This option saves roughly 2GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    torch_dtype="auto",
    device_map="auto"
)
model.disable_talker()
```

For a more flexible experience, we recommend that users decide whether to return audio when the `generate` function is called. If `return_audio` is set to `False`, the model will only return text outputs, which makes text responses faster.

```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    torch_dtype="auto",
    device_map="auto"
)
...
text_ids = model.generate(**inputs, return_audio=False)
```

#### Change voice type of output audio

Qwen2.5-Omni supports changing the voice of the output audio. The `"Qwen/Qwen2.5-Omni-7B"` checkpoint supports the following two voice types:

| Voice Type | Gender | Description |
|------------|--------|-------------|
| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.|
| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.|

Users can use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the voice type defaults to `Chelsie`.

```python
text_ids, audio = model.generate(**inputs, speaker="Chelsie")
```

```python
text_ids, audio = model.generate(**inputs, speaker="Ethan")
```

#### Flash-Attention 2 to speed up generation

First, make sure to install the latest version of Flash Attention 2:

```bash
pip install -U flash-attn --no-build-isolation
```

Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.

To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:

```python
import torch
from transformers import Qwen2_5OmniForConditionalGeneration

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
```

## Citation

If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)

```BibTeX
@article{Qwen2.5-Omni,
  title={Qwen2.5-Omni Technical Report},
  author={Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, Junyang Lin},
  journal={arXiv preprint arXiv:2503.20215},
  year={2025}
}
```

<br>
Weymouth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_dense_starfish
Weymouth
2025-05-31T11:49:52Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am downy dense starfish", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-10T08:34:17Z
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_dense_starfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am downy dense starfish
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_dense_starfish

This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Weymouth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_dense_starfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance-1000-v1
ibrahimbukhariLingua
2025-05-31T11:49:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-31T11:48:54Z
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-en-wikipedia-finance-1000-v1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for qwen2.5-3b-en-wikipedia-finance-1000-v1

This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-3b-en-wikipedia-finance-1000-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
kayacrypto/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra
kayacrypto
2025-05-31T11:48:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mute tall zebra", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T12:12:42Z
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mute tall zebra
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra

This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kayacrypto/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```