modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
squeeze-ai-lab/sq-llama-65b-w4-s0
squeeze-ai-lab
2023-07-02T23:06:10Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2302.13971", "region:us" ]
null
2023-06-18T19:09:33Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but naive quantization hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier values of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint and the same latency, yet with higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit quantized LLaMA 65B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [LLaMA 65B](https://arxiv.org/abs/2302.13971) * **Bitwidth:** 4-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
squeeze-ai-lab/sq-llama-65b-w3-s0
squeeze-ai-lab
2023-07-02T23:05:26Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2302.13971", "region:us" ]
null
2023-06-18T19:09:04Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but naive quantization hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier values of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint and the same latency, yet with higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit quantized LLaMA 65B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [LLaMA 65B](https://arxiv.org/abs/2302.13971) * **Bitwidth:** 3-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
crowbarmassage/Reinforce-CartPole-v1
crowbarmassage
2023-07-02T23:03:33Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-01T01:16:29Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 488.10 +/- 51.19 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
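The card above names the REINFORCE algorithm but does not show it. Below is a minimal sketch of a REINFORCE training loop on CartPole-v1, assuming gymnasium and PyTorch; it only illustrates the algorithm and is not the course's exact training script, and the network size, learning rate, and episode count are illustrative.

```python
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
# Small policy network: 4 observation dims -> 2 action probabilities (sizes are illustrative).
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted returns, normalized, then the policy-gradient loss -sum(log_prob * return).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```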
hopkins/mbart-finetuned-eng-ind-23
hopkins
2023-07-02T23:01:49Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-02T22:47:59Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-ind-23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-23 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7677 - Bleu: 21.9635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
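For readers who want to try the checkpoint above, here is a minimal English-to-Indonesian inference sketch using the standard mBART-50 API, assuming the repository keeps the base model's language codes and that "ind" denotes Indonesian (id_ID).

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/mbart-finetuned-eng-ind-23"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"],  # force Indonesian as the target language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```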
hopkins/mbart-finetuned-eng-ind-22
hopkins
2023-07-02T23:01:27Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-02T22:47:38Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-ind-22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-22 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7614 - Bleu: 21.7813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
squeeze-ai-lab/sq-llama-13b-w3-s5
squeeze-ai-lab
2023-07-02T23:01:26Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2302.13971", "region:us" ]
null
2023-06-19T23:44:15Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but naive quantization hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier values of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint and the same latency, yet with higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit quantized LLaMA 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [LLaMA 13B](https://arxiv.org/abs/2302.13971) * **Bitwidth:** 3-bit * **Sparsity Level:** 0.05% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
Raizel123/Zzeniferlora
Raizel123
2023-07-02T23:00:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-02T22:57:57Z
--- license: creativeml-openrail-m ---
squeeze-ai-lab/sq-llama-7b-w3-s45
squeeze-ai-lab
2023-07-02T22:59:45Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2302.13971", "region:us" ]
null
2023-06-19T23:43:55Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but naive quantization hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier values of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint and the same latency, yet with higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit quantized LLaMA 7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [LLaMA 7B](https://arxiv.org/abs/2302.13971) * **Bitwidth:** 3-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
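To make the Dense-and-Sparse idea described in the SqueezeLLM cards above concrete, here is a minimal NumPy sketch: the largest-magnitude weights (0.45% here, matching this checkpoint's sparsity level) are kept in a full-precision sparse component, while the remaining dense part is quantized. The uniform round-to-nearest quantizer is an illustrative stand-in; SqueezeLLM itself uses sensitivity-based non-uniform quantization.

```python
import numpy as np

def dense_and_sparse_split(W, outlier_pct=0.45, bits=3):
    # Treat the largest-magnitude weights as the sparse "outlier" component.
    k = max(1, int(W.size * outlier_pct / 100))
    threshold = np.partition(np.abs(W).ravel(), -k)[-k]
    mask = np.abs(W) >= threshold
    sparse = np.where(mask, W, 0.0)   # kept in full precision
    dense = np.where(mask, 0.0, W)    # heavily quantized below

    # Illustrative uniform round-to-nearest quantization of the dense part.
    levels = 2 ** bits
    scale = (dense.max() - dense.min()) / (levels - 1)
    dense_q = np.round((dense - dense.min()) / scale) * scale + dense.min()

    # Reconstruct: quantized dense values plus exact outliers.
    return np.where(mask, sparse, dense_q)

W = np.random.randn(256, 256).astype(np.float32)
W_hat = dense_and_sparse_split(W, outlier_pct=0.45, bits=3)
print("max abs reconstruction error:", np.abs(W - W_hat).max())
```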
dommii/deb
dommii
2023-07-02T22:58:36Z
4
0
transformers
[ "transformers", "music", "license:unlicense", "endpoints_compatible", "region:us" ]
null
2023-07-02T19:20:02Z
--- license: unlicense tags: - music --- Voice model of Deb, my mother-in-law. For use with "so-vits-svc".
luckycanucky/droogs-x25
luckycanucky
2025-05-31T00:39:37Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T23:51:04Z
--- base_model: unsloth/llama-3.2-3b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** luckycanucky - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
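Since the repository above is tagged `gguf`, one way to try it locally is llama-cpp-python's `Llama.from_pretrained` helper. A minimal sketch follows; the quant filename pattern is an assumption, so match it to an actual GGUF file in the repo's file list.

```python
from llama_cpp import Llama

# Filename pattern is hypothetical; point it at a real GGUF file from the repo.
llm = Llama.from_pretrained(
    repo_id="luckycanucky/droogs-x25",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```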
QuantStack/Phantom_Wan_14B-GGUF
QuantStack
2025-05-31T00:38:43Z
130
3
gguf
[ "gguf", "image-to-video", "en", "base_model:bytedance-research/Phantom", "base_model:quantized:bytedance-research/Phantom", "license:apache-2.0", "region:us" ]
image-to-video
2025-05-29T21:54:55Z
--- base_model: bytedance-research/Phantom library_name: gguf quantized_by: wsbagnsv1 tags: - image-to-video language: - en license: apache-2.0 --- This is a direct GGUF conversion of [bytedance-research/Phantom](https://huggingface.co/bytedance-research/Phantom). The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders: | Type | Name | Location | Download | | ------------ | ------------------| ------------------------------ | ---------------- | | Main Model | Phantom_Wan_14B | `ComfyUI/models/unet` | GGUF (this repo) | | Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) | | VAE | wan_2.1_vae | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors) | [**Example workflow**](https://huggingface.co/QuantStack/Phantom_Wan_14B-GGUF/resolve/main/Phantom_example_workflow.json?download=true) ### Notes *As this is a quantized model, not a finetune, all the same restrictions and original license terms still apply.*
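As a complement to the folder table above, here is a minimal sketch of fetching the main model with `huggingface_hub` into the default ComfyUI layout; the GGUF filename below is a hypothetical placeholder, so substitute the quant you actually want from the repo's file list.

```python
from huggingface_hub import hf_hub_download

# Main model goes into ComfyUI/models/unet (filename is a placeholder).
hf_hub_download(
    repo_id="QuantStack/Phantom_Wan_14B-GGUF",
    filename="Phantom_Wan_14B-Q4_K_M.gguf",
    local_dir="ComfyUI/models/unet",
)

# The text encoder and VAE are downloaded from the repos linked in the table above.
```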
rbdrecords/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-smooth_scurrying_tortoise
rbdrecords
2025-05-31T00:38:00Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am smooth scurrying tortoise", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-11T15:55:13Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-smooth_scurrying_tortoise tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am smooth scurrying tortoise - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-smooth_scurrying_tortoise This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rbdrecords/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-smooth_scurrying_tortoise", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/gemma-2-mitra-e-i1-GGUF
mradermacher
2025-05-31T00:37:51Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-30T23:40:39Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/buddhist-nlp/gemma-2-mitra-e
gagein/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thorny_lightfooted_panda
gagein
2025-05-31T00:37:40Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am thorny lightfooted panda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-09T17:26:01Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thorny_lightfooted_panda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am thorny lightfooted panda - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thorny_lightfooted_panda This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gagein/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thorny_lightfooted_panda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Wehimar/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_yapping_caterpillar
Wehimar
2025-05-31T00:37:24Z
31
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mute yapping caterpillar", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T05:37:19Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_yapping_caterpillar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mute yapping caterpillar - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_yapping_caterpillar This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Wehimar/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_yapping_caterpillar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
itufilum/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala
itufilum
2025-05-31T00:36:45Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am invisible domestic impala", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-13T13:25:23Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am invisible domestic impala - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="itufilum/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Amoros/DinoAmoros_is_103_bs_32_ep_150-large-2025_05_30_52071-bs32_freeze_monolabel
Amoros
2025-05-31T00:36:40Z
0
0
null
[ "tensorboard", "safetensors", "dinov2", "hf-summary-writer", "region:us" ]
null
2025-05-30T10:28:07Z
--- tags: - hf-summary-writer ---
insanesaga/Qwen2.5-7B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison
insanesaga
2025-05-31T00:36:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nocturnal clawed bison", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-7B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T16:01:51Z
--- base_model: Gensyn/Qwen2.5-7B-Instruct library_name: transformers model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nocturnal clawed bison - unsloth - trl licence: license --- # Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="insanesaga/Qwen2.5-7B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sachinsharma77340-sachin/huggingface/runs/nlkrsoks) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cigan13/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-climbing_omnivorous_cobra
cigan13
2025-05-31T00:36:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am climbing omnivorous cobra", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-22T00:17:08Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-climbing_omnivorous_cobra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am climbing omnivorous cobra - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-climbing_omnivorous_cobra This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cigan13/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-climbing_omnivorous_cobra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
udum/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_opaque_worm
udum
2025-05-31T00:36:16Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am unseen opaque worm", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T23:15:26Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_opaque_worm tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am unseen opaque worm - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_opaque_worm This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="udum/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_opaque_worm", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
serhanab/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-savage_monstrous_porcupine
serhanab
2025-05-31T00:36:10Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am savage monstrous porcupine", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-11T01:58:59Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-savage_monstrous_porcupine tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am savage monstrous porcupine - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-savage_monstrous_porcupine This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="serhanab/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-savage_monstrous_porcupine", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3
ArtusDev
2025-05-31T00:36:02Z
0
0
transformers
[ "transformers", "mergekit", "merge", "exl3", "base_model:Steelskull/L3.3-Nevoria-R1-70b", "base_model:quantized:Steelskull/L3.3-Nevoria-R1-70b", "license:other", "model-index", "endpoints_compatible", "region:us" ]
null
2025-05-30T21:22:51Z
--- base_model: Steelskull/L3.3-Nevoria-R1-70b base_model_relation: quantized quantized_by: ArtusDev library_name: transformers license: other license_name: eva-llama3.3 tags: - mergekit - merge - exl3 model-index: - name: L3.3-Nevoria-R1-70b results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: wis-k/instruction-following-eval split: train args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 60.24 name: averaged accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: SaylorTwift/bbh split: test args: num_few_shot: 3 metrics: - type: acc_norm value: 56.17 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: lighteval/MATH-Hard split: test args: num_few_shot: 4 metrics: - type: exact_match value: 46.68 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa split: train args: num_few_shot: 0 metrics: - type: acc_norm value: 29.19 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 20.19 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.59 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b name: Open LLM Leaderboard --- ## EXL3 Quants of Steelskull/L3.3-Nevoria-R1-70b EXL3 quants of [Steelskull/L3.3-Nevoria-R1-70b](https://huggingface.co/Steelskull/L3.3-Nevoria-R1-70b) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization. 
### Quants | Quant(Revision) | Bits per Weight | Head Bits | | -------- | ---------- | --------- | | [3.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/3.0bpw_H6) | 3.0 | 6 | | [3.5_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/3.5bpw_H6) | 3.5 | 6 | | [3.75_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/3.75bpw_H6) | 3.75 | 6 | | [4.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/4.0bpw_H6) | 4.0 | 6 | | [4.25_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/4.25bpw_H6) | 4.25 | 6 | | [4.5_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/4.5bpw_H6) | 4.5 | 6 | | [5.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/5.0bpw_H6) | 5.0 | 6 | | [6.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/6.0bpw_H6) | 6.0 | 6 | | [8.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/8.0bpw_H6) | 8.0 | 6 | | [8.0_H8](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/8.0bpw_H8) | 8.0 | 8 | ### Downloading quants with huggingface-cli <details> <summary>Click to view download instructions</summary> Install huggingface-cli: ```bash pip install -U "huggingface_hub[cli]" ``` Download a quant by targeting the specific quant revision (branch): ``` huggingface-cli download ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3 --revision "5bpw_H6" --local-dir ./ ``` </details>
Antonwen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear
Antonwen
2025-05-31T00:36:01Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pale wary bear", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T05:30:34Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pale wary bear - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Antonwen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf
Bifrost-AI
2025-05-31T00:35:39Z
86
1
null
[ "gguf", "code", "finance", "chat", "text-generation", "large-language-model", "en", "dataset:Bifrost-AI/Solana-Vanguard-Challenge", "arxiv:2503.01743", "base_model:Bifrost-AI/Phi-4-bifrost-sol-3.8B", "base_model:quantized:Bifrost-AI/Phi-4-bifrost-sol-3.8B", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-05-23T11:27:41Z
--- license: mit datasets: - Bifrost-AI/Solana-Vanguard-Challenge language: - en metrics: - accuracy - code_eval base_model: - Bifrost-AI/Phi-4-bifrost-sol-3.8B pipeline_tag: text-generation tags: - code - finance - chat - text-generation - large-language-model --- # Phi 4 Bifrost SOL 3B (Mini Instruct) ### This variant of Microsoft's Phi 4 Mini Instruct model was fine-tuned via supervised fine-tuning (SFT) on a blockchain-specific dataset (Bifrost-AI/Solana-Vanguard-Challenge) and is optimized for downstream tasks in blockchain coding and smart contract development on the Solana ecosystem. The **Solana Vanguard Challenge** dataset, comprising 1,000 diverse and in-depth questions, offers full-spectrum coverage of the Solana ecosystem. It spans fundamental blockchain concepts, advanced on-chain programming in Rust and the Anchor framework, client-side integration in TypeScript, detailed security strategies, and performance as well as regulatory considerations. Phi 4 Bifrost SOL Mini Instruct is in active development, with additional fine-tuning sessions and benchmark statistics coming soon! ## Provided Quants | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q4_K_S.gguf) | Q4_K_S | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q6_K.gguf) | Q6_K | 3.3 | very good quality | | [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/Bifrost-AI/Phi-4-bifrost-sol-3.8B-gguf/resolve/main/Phi-4-bifrost-sol-3.8B.f16.gguf) | F16 | 7.8 | 16 bpw, overkill | ## Training Session: - Time: 9 hours & 33 minutes - GPU: NVIDIA GeForce RTX 3090 - Batches: 2500 - Batch-size: 1 - Learning-rate: 2e-5 - Training-loss: 0.88 - Eval-loss: 0.68 ## Dataset Composition - **Total Questions:** 1,000 - **Languages Covered:** - **Rust:** On-chain smart contract development, security best practices, advanced state management, CPIs, PDAs, and more. - **TypeScript:** Client-side integration using @solana/web3.js, wallet adapters, Metaplex for NFT protocols, dynamic transaction composition, and front-end dApp development. - **Planned Extensions:** - **C# (Solnet):** To be integrated later for .NET ecosystem coverage. #### Example After obtaining the Phi-4-bifrost-sol model checkpoints, users can use this sample code for inference. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_path = "Bifrost-AI/Phi-4-bifrost-sol-3.8B" model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path) messages = [ {"role": "system", "content": "This is a dialog transcript where the User interacts with an agent named Eva that can see, talk and act. Eva works as a Professional typescript, rust & csharp Software engineer and possesses qualities such as expert, methodical, innovative. She always responds immediately and precisely. She was created by Microsoft & Bifrost. 
Wrap code in ``` for readability."}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Disclaimer We do not recommend using Phi4 Bifrost SOL Mini-Instruct in commercial or real-world applications without further testing and development. This current model(v1) is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly. #### ------------------------Base Model Card------------------------ 🎉**Phi-4**: [[mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning) | [reasoning](https://huggingface.co/microsoft/Phi-4-reasoning)] | [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)]; [[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)] ## Model Summary Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures. 📰 [Phi-4-mini Microsoft Blog](https://aka.ms/phi4-feb2025) <br> 📖 [Phi-4-mini Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br> 👩‍🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br> 🏡 [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br> 🖥️ Try It [Azure](https://aka.ms/phi-4-mini/azure), [Huggingface](https://huggingface.co/spaces/microsoft/phi-4-mini) <br> 🚀 [Model paper](https://huggingface.co/papers/2503.01743) ## Intended Uses ### Primary Use Cases The model is intended for broad multilingual commercial and research use. The model provides uses for general purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially math and logic). The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. ### Use Case Considerations The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance difference across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case. ***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.*** ## Release Notes This release of Phi-4-mini-instruct is based on valuable user feedback from the Phi-3 series. 
The Phi-4-mini model employed new architecture for efficiency, larger vocabulary for multilingual support, and better post-training techniques were used for instruction following, function calling, as well as additional data leading to substantial gains on key capabilities. It is anticipated that most use cases will benefit from this release, but users are encouraged to test in their particular AI applications. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4-mini-instruct is welcomed and crucial to the model’s evolution and improvement. ### Model Quality To understand the capabilities, the 3.8B parameters Phi-4-mini-instruct model was compared with a set of models over a variety of benchmarks using an internal benchmark platform (See Appendix A for benchmark methodology). A high-level overview of the model quality is as follows: | Benchmark | Similar size | | | | |2x size | | | | | | |----------------------------------|-------------|-------------------|-------------------|-------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------| | | Phi-4 mini-Ins | Phi-3.5-mini-Ins | Llama-3.2-3B-Ins | Mistral-3B | Qwen2.5-3B-Ins | Qwen2.5-7B-Ins | Mistral-8B-2410 | Llama-3.1-8B-Ins | Llama-3.1-Tulu-3-8B | Gemma2-9B-Ins | GPT-4o-mini-2024-07-18 | | **Popular aggregated benchmark** | | | | | | | | | | | | | Arena Hard | 32.8 | 34.4 | 17.0 | 26.9 | 32.0 | 55.5 | 37.3 | 25.7 | 42.7 | 43.7 | 53.7 | | BigBench Hard (0-shot, CoT) | 70.4 | 63.1 | 55.4 | 51.2 | 56.2 | 72.4 | 53.3 | 63.4 | 55.5 | 65.7 | 80.4 | | MMLU (5-shot) | 67.3 | 65.5 | 61.8 | 60.8 | 65.0 | 72.6 | 63.0 | 68.1 | 65.0 | 71.3 | 77.2 | | MMLU-Pro (0-shot, CoT) | 52.8 | 47.4 | 39.2 | 35.3 | 44.7 | 56.2 | 36.6 | 44.0 | 40.9 | 50.1 | 62.8 | | **Reasoning** | | | | | | | | | | | | | ARC Challenge (10-shot) | 83.7 | 84.6 | 76.1 | 80.3 | 82.6 | 90.1 | 82.7 | 83.1 | 79.4 | 89.8 | 93.5 | | BoolQ (2-shot) | 81.2 | 77.7 | 71.4 | 79.4 | 65.4 | 80.0 | 80.5 | 82.8 | 79.3 | 85.7 | 88.7 | | GPQA (0-shot, CoT) | 25.2 | 26.6 | 24.3 | 24.4 | 23.4 | 30.6 | 26.3 | 26.3 | 29.9 | 39.1 | 41.1 | | HellaSwag (5-shot) | 69.1 | 72.2 | 77.2 | 74.6 | 74.6 | 80.0 | 73.5 | 72.8 | 80.9 | 87.1 | 88.7 | | OpenBookQA (10-shot) | 79.2 | 81.2 | 72.6 | 79.8 | 79.3 | 82.6 | 80.2 | 84.8 | 79.8 | 90.0 | 90.0 | | PIQA (5-shot) | 77.6 | 78.2 | 68.2 | 73.2 | 72.6 | 76.2 | 81.2 | 83.2 | 78.3 | 83.7 | 88.7 | | Social IQA (5-shot) | 72.5 | 75.1 | 68.3 | 73.9 | 75.3 | 75.3 | 77.6 | 71.8 | 73.4 | 74.7 | 82.9 | | TruthfulQA (MC2) (10-shot) | 66.4 | 65.2 | 59.2 | 62.9 | 64.3 | 69.4 | 63.0 | 69.2 | 64.1 | 76.6 | 78.2 | | Winogrande (5-shot) | 67.0 | 72.2 | 53.2 | 59.8 | 63.3 | 71.1 | 63.1 | 64.7 | 65.4 | 74.0 | 76.9 | | **Multilingual** | | | | | | | | | | | | | Multilingual MMLU (5-shot) | 49.3 | 51.8 | 48.1 | 46.4 | 55.9 | 64.4 | 53.7 | 56.2 | 54.5 | 63.8 | 72.9 | | MGSM (0-shot, CoT) | 63.9 | 49.6 | 44.6 | 44.6 | 53.5 | 64.5 | 56.7 | 56.7 | 58.6 | 75.1 | 81.7 | | **Math** | | | | | | | | | | | | | GSM8K (8-shot, CoT) | 88.6 | 76.9 | 75.6 | 80.1 | 80.6 | 88.7 | 81.9 | 82.4 | 84.3 | 84.9 | 91.3 | | MATH (0-shot, CoT) | 64.0 | 49.8 | 46.7 | 41.8 | 61.7 | 60.4 | 41.6 | 47.6 | 46.1 | 51.3 | 70.2 | | **Overall** | **63.5** | **60.5** | **56.2** | **56.9** | **60.1** | **67.9** | **60.2** | **62.3** | **60.9** | **65.0** | **75.5** | Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger 
models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness. However, it may be possible to resolve such weakness by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings. ## Usage ### Tokenizer Phi-4-mini-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Input Formats Given the nature of the training data, the Phi-4-mini-instruct model is best suited for prompts using specific formats. Below are the two primary formats: #### Chat format This format is used for general conversation and instructions: ```yaml <|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|> ``` #### Tool-enabled function-calling format This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. Example: ` <|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|> ` ### Inference with vLLM #### Requirements List of required packages: ``` flash_attn==2.7.4.post1 torch==2.5.1 vllm>=0.7.3 ``` ### Inference with Transformers #### Requirements Phi-4 family has been integrated in the `4.49.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Python 3.8 and 3.10 will work best. List of required packages: ``` flash_attn==2.7.4.post1 torch==2.5.1 transformers==4.49.0 accelerate==1.3.0 ``` ## Responsible AI Considerations Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. 
Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses. + Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift. Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model + **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. 
## Responsible AI Considerations

Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance, as well as performance disparities across non-English languages. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make them unsuitable to deploy in sensitive contexts without additional mitigations specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses.
+ Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to put appropriate mitigations in place, such as limiting the number of conversation turns, to account for possible conversational drift.

Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural and linguistic context. The Phi 4 family of models are general-purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:

+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

## Training

### Model

+ **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. Compared with Phi-3.5-mini, the major changes in Phi-4-mini-instruct are the 200K vocabulary, grouped-query attention, and shared input and output embeddings.<br>
+ **Inputs:** Text. It is best suited for prompts using the chat format.<br>
+ **Context length:** 128K tokens<br>
+ **GPUs:** 512 A100-80G<br>
+ **Training time:** 21 days<br>
+ **Training data:** 5T tokens<br>
+ **Outputs:** Generated text in response to the input<br>
+ **Dates:** Trained between November and December 2024<br>
+ **Status:** This is a static model trained on offline datasets with a cutoff date of June 2024 for publicly available data.<br>
+ **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
+ **Release date:** February 2025<br>

### Training Datasets

Phi-4-mini's training data includes a wide variety of sources, totaling 5 trillion tokens, and is a combination of 1) publicly available documents filtered for quality, selected high-quality educational data, and code; 2) newly created synthetic, "textbook-like" data for the purpose of teaching math, coding, common sense reasoning, and general knowledge of the world (e.g., science, daily activities, theory of mind, etc.); and 3) high-quality, chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction following, truthfulness, honesty, and helpfulness. Focus was placed on the quality of data that could potentially improve the reasoning ability of the model, and the publicly available documents were filtered to contain a preferred level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but such information was removed to leave more model capacity for reasoning, given the model's small size. More details about the data can be found in the Phi-4-mini-instruct technical report.

The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis.
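As a rough illustration of the n-gram matching described above, the sketch below flags a training sample whose n-gram overlap with any benchmark text exceeds a threshold. The normalization, n-gram size, and threshold are assumptions chosen for illustration, not the settings actually used for Phi-4-mini-instruct.

```python
# Rough sketch of an n-gram decontamination check; parameters are illustrative assumptions.
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation before whitespace tokenization."""
    return re.sub(r"[^a-z0-9\s]", " ", text.lower())

def ngrams(text: str, n: int = 13) -> set[tuple[str, ...]]:
    """Return the set of word n-grams of a normalized text."""
    tokens = normalize(text).split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample: str, benchmark_texts: list[str],
                    n: int = 13, threshold: float = 0.1) -> bool:
    """Flag a training sample whose n-gram overlap with any benchmark exceeds the threshold."""
    sample_ngrams = ngrams(sample, n)
    if not sample_ngrams:
        return False
    for bench in benchmark_texts:
        overlap = len(sample_ngrams & ngrams(bench, n)) / len(sample_ngrams)
        if overlap > threshold:
            return True
    return False
```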
### Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/sample_finetune.py).

## Safety Evaluation and Red-Teaming

Various evaluation techniques, including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets, were leveraged to evaluate Phi-4 models' propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of any one approach alone. Findings across the various evaluation methods indicate that the safety post-training done as detailed in the Phi 3 Safety Post-Training paper had a positive impact across multiple languages and risk categories, as observed in refusal rates (refusal to output undesirable content) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the Phi 3 Safety Post-Training paper.

For this release, the red team tested the model in English, Chinese, Japanese, Spanish, Portuguese, Arabic, Thai, and Russian for the following potential harms: Hate Speech and Bias, Violent Crimes, Specialized Advice, and Election Information. Their findings indicate that the model is resistant to jailbreak techniques across languages, but that language-specific attack prompts leveraging cultural context can cause the model to output harmful content. Another insight was that, in function-calling scenarios, the model could sometimes hallucinate function names or URLs. The model may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low-resource languages, and risk areas that account for cultural nuances where those languages are spoken.

## Software

* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware

Note that by default, the Phi-4-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:

* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

If you want to run the model on:

* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
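A minimal loading sketch for that fallback is shown below; the dtype and device settings are illustrative assumptions for older GPUs without bfloat16 or flash-attention support.

```python
# Fallback loading sketch for GPUs without flash-attention support (e.g., V100).
# dtype and device_map are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="eager",  # disable flash attention
    torch_dtype=torch.float16,    # older GPUs lack bfloat16 support
    device_map="auto",
)
```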
## License

The model is licensed under the [MIT license](./LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

## Appendix A: Benchmark Methodology

We include a brief word on methodology here, and in particular on how we think about optimizing prompts. In an ideal world, we would never change any prompts in our benchmarks, to ensure that comparisons across different models are always apples-to-apples. Indeed, this is our default approach, and is the case for the vast majority of models we have run to date. There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example:

+ A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with "Sure, I can help with that. …", which may break the parser. In such cases, we have opted to try different system messages (e.g. "You must always respond to a question" or "Get to the point!").
+ With some models, we observed that few shots actually hurt model performance. In this case we did allow running the benchmarks with 0-shot for all cases.
+ We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models have different keywords, e.g. Human vs User. In these cases, we do allow for model-specific mappings for chat to completion prompts.

However, we do not:

+ Pick different few-shot examples. Few shots will always be the same when comparing different models.
+ Change prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice.

### Benchmark datasets

The model was evaluated across a breadth of public and internal benchmarks to understand the model's capabilities under multiple tasks and conditions. While most evaluations use English, a leading multilingual benchmark covering performance in select languages was also incorporated. More specifically,

+ Reasoning:
  + Winogrande: commonsense reasoning around pronoun resolution
  + PIQA: physical commonsense reasoning around everyday situations
  + ARC-challenge: grade-school multiple choice science questions
  + GPQA: very hard questions written and validated by experts in biology, physics, and chemistry
  + MedQA: medical question answering
  + Social IQA: social commonsense intelligence
  + BoolQ: natural questions from context
  + TruthfulQA: grounded reasoning
+ Language understanding:
  + HellaSwag: commonsense natural language inference around everyday events
  + ANLI: adversarial natural language inference
+ Function calling:
  + Berkeley function calling function and tool call
  + Internal function calling benchmarks
+ World knowledge:
  + TriviaQA: trivia questions on general topics
+ Math:
  + GSM8K: grade-school math word problems
  + GSM8K Hard: grade-school math word problems with large values and some absurdity
  + MATH: challenging competition math problems
+ Code:
  + HumanEval, HumanEval+, MBPP, MBPP+: Python coding tasks
  + LiveCodeBench, LiveBench: contamination-free code tasks
  + BigCode Bench: challenging programming tasks
  + Spider: SQL query tasks
  + Internal coding benchmarks
+ Instruction following:
  + IFEval: verifiable instructions
  + Internal instruction-following benchmarks
+ Multilingual:
  + MGSM: multilingual grade-school math
  + Multilingual MMLU and MMLU-pro
  + MEGA: multilingual NLP tasks
+ Popular aggregated datasets: MMLU, MMLU-pro, BigBench-Hard, AGI Eval
+ Multi-turn conversations:
  + Data generated by in-house adversarial conversation simulation tool
+ Single-turn trustworthiness evaluation:
  + DecodingTrust: a collection of trustworthiness benchmarks in eight different perspectives
  + XSTest: exaggerated safety evaluation
  + Toxigen: adversarial and hate speech detection
+ Red Team:
  + Responses to prompts provided by AI Red Team at Microsoft

---
elsvastika/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan
elsvastika
2025-05-31T00:35:35Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am graceful wary orangutan", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-05T17:26:46Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am graceful wary orangutan - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="elsvastika/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
TanAlexanderlz/RALL_RGBCROP_Aug16F-8B16F-GACWDlr
TanAlexanderlz
2025-05-31T00:35:32Z
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-05-31T00:35:11Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base-finetuned-kinetics tags: - generated_from_trainer model-index: - name: RALL_RGBCROP_Aug16F-8B16F-GACWDlr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RALL_RGBCROP_Aug16F-8B16F-GACWDlr This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3462 ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
Asib1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant
Asib1
2025-05-31T00:35:22Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pensive leggy ant", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T07:08:10Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pensive leggy ant - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Asib1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AlexanderArtT/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog
AlexanderArtT
2025-05-31T00:35:17Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tiny nimble warthog", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-13T22:11:38Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tiny nimble warthog - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AlexanderArtT/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
s3g4tyh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse
s3g4tyh
2025-05-31T00:34:47Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am waddling polished mouse", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:10:11Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am waddling polished mouse - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="s3g4tyh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
YoussefAshmawy/Graduation_Project_Whisper_base
YoussefAshmawy
2025-05-31T00:34:25Z
220
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ar", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-12-11T12:25:24Z
--- library_name: transformers language: - ar license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer metrics: - wer model-index: - name: Whisper base AR - YA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper base AR - YA This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set: - Loss: 0.0027 - Wer: 0.0449 - Cer: 0.0201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:-----:|:---------------:|:------:|:------:| | 0.0074 | 1.0 | 513 | 0.0028 | 0.0370 | 0.0163 | | 0.0036 | 2.0 | 1026 | 0.0033 | 0.0426 | 0.0172 | | 0.0013 | 3.0 | 1539 | 0.0044 | 0.0446 | 0.0189 | | 0.001 | 4.0 | 2052 | 0.0047 | 0.0521 | 0.0271 | | 0.0009 | 5.0 | 2565 | 0.0054 | 0.0557 | 0.0276 | | 0.0011 | 6.0 | 3078 | 0.0055 | 0.0636 | 0.0287 | | 0.0008 | 7.0 | 3591 | 0.0054 | 0.0582 | 0.0255 | | 0.0007 | 8.0 | 4104 | 0.0056 | 0.0555 | 0.0238 | | 0.0006 | 9.0 | 4617 | 0.0059 | 0.0608 | 0.0252 | | 0.0009 | 10.0 | 5130 | 0.0052 | 0.0617 | 0.0250 | | 0.0003 | 11.0 | 5643 | 0.0059 | 0.0582 | 0.0250 | | 0.0003 | 12.0 | 6156 | 0.0060 | 0.0588 | 0.0244 | | 0.0004 | 13.0 | 6669 | 0.0055 | 0.0565 | 0.0237 | | 0.0004 | 14.0 | 7182 | 0.0056 | 0.0550 | 0.0245 | | 0.0002 | 15.0 | 7695 | 0.0061 | 0.0592 | 0.0250 | | 0.0002 | 16.0 | 8208 | 0.0066 | 0.0559 | 0.0236 | | 0.0001 | 17.0 | 8721 | 0.0064 | 0.0565 | 0.0241 | | 0.0001 | 18.0 | 9234 | 0.0065 | 0.0530 | 0.0210 | | 0.0001 | 19.0 | 9747 | 0.0064 | 0.0528 | 0.0228 | | 0.0 | 20.0 | 10260 | 0.0066 | 0.0492 | 0.0220 | | 0.0 | 21.0 | 10773 | 0.0067 | 0.0525 | 0.0234 | | 0.0 | 22.0 | 11286 | 0.0070 | 0.0523 | 0.0232 | | 0.0 | 23.0 | 11799 | 0.0068 | 0.0532 | 0.0227 | | 0.0 | 24.0 | 12312 | 0.0068 | 0.0530 | 0.0223 | | 0.0 | 25.0 | 12825 | 0.0068 | 0.0509 | 0.0229 | | 0.0 | 26.0 | 13338 | 0.0069 | 0.0490 | 0.0218 | | 0.0 | 27.0 | 13851 | 0.0067 | 0.0490 | 0.0208 | | 0.0 | 28.0 | 14364 | 0.0068 | 0.0482 | 0.0204 | | 0.0 | 29.0 | 14877 | 0.0065 | 0.0525 | 0.0230 | | 0.0 | 29.9424 | 15360 | 0.0068 | 0.0478 | 0.0204 | ### Framework versions - Transformers 4.51.1 - Pytorch 2.5.1+cu124 - Datasets 2.20.0 - Tokenizers 0.21.0
ismitevijay/vj_cancer_classifier
ismitevijay
2025-05-31T00:33:54Z
0
0
null
[ "safetensors", "distilbert", "license:apache-2.0", "region:us" ]
null
2025-05-31T00:30:44Z
--- license: apache-2.0 ---
Putru7/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-insectivorous_shrewd_beaver
Putru7
2025-05-31T00:33:39Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am insectivorous shrewd beaver", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-23T23:37:16Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-insectivorous_shrewd_beaver tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am insectivorous shrewd beaver - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-insectivorous_shrewd_beaver This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Putru7/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-insectivorous_shrewd_beaver", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
PhilLovesCats/LLaMA-1-7B-GGUF
PhilLovesCats
2025-05-31T00:33:28Z
0
0
null
[ "pytorch", "gguf", "llama", "facebook", "meta", "text-generation", "en", "region:us" ]
text-generation
2025-05-30T21:16:42Z
--- language: - en tags: - facebook - meta - pytorch - llama model_name: Llama 7B inference: false model_creator: Meta Llama model_type: llama pipeline_tag: text-generation prompt_template: '[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don''t know the answer to a question, please don''t share false information. <</SYS>> {prompt}[/INST] ' quantized_by: PhilLovesCats --- # LLaMA 1 7B - GGUF - Model creator: [Meta Llama](https://huggingface.co/meta-llama) - **The 'Provided files'-section layout, as well as the metadata layout, are heavily inspired by [TheBloke](https://huggingface.co/TheBloke)** - **The 'License'-section, as well as the 'LICENSE'-file, are borrowed from [huggyllama](https://huggingface.co/huggyllama)** Please tell me if I have violated your copyright, so that I can remove this section! ## Description Because the old LLaMA (1) models are in a deprecated format and are hard to use with modern tools, I converted them into a [PyTorch](https://pytorch.org/get-started/locally/) model using the [transformers](https://huggingface.co/docs/transformers/index) library's `convert_llama_weights_to_hf.py` script. I then used the `convert_hf_to_gguf.py` script from [llama.cpp](https://github.com/ggml-org/llama.cpp/tree/master) to make the model usable with the latest versions of tools such as [LM Studio](https://lmstudio.ai/). ## Provided files This table only lists the quantized (`.gguf`) files and excludes the PyTorch (`.bin`) ones. | Name | Quant method | Bits | Size | MD5-Hash | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-1-7b.TQ1_0.gguf](https://huggingface.co/PhilLovesCats/LLaMA-1-7B-GGUF/blob/main/llama-1-7b.TQ1_0.gguf) | TQ1_0 | 1 | 1.76 GB| 188467F7FDD001BBBA244071E5EBF05E | smallest, significant quality loss - not recommended for most purposes | | [llama-1-7b.TQ2_0.gguf](https://huggingface.co/PhilLovesCats/LLaMA-1-7B-GGUF/blob/main/llama-1-7b.TQ2_0.gguf) | TQ2_0 | 2 | 2.04 GB| 8C0A29B4068CD2016A5DF0924B6F8FCF | very small, significant quality loss - not recommended for most purposes | | [llama-1-7b.Q8_0.gguf](https://huggingface.co/PhilLovesCats/LLaMA-1-7B-GGUF/blob/main/llama-1-7b.Q8_0.gguf) | Q8_0 | 8 | 6.66 GB| 0D757DD0FC2000D745FE5BE6DDD033B2 | large, low quality loss | | [llama-1-7b.F16.gguf](https://huggingface.co/PhilLovesCats/LLaMA-1-7B-GGUF/blob/main/llama-1-7b.F16.gguf) | F16 | 16 | 12.50 GB| D866619C36114FF150C43B955B1F2729 | nearly original, extremely low quality loss - not recommended | | [llama-1-7b.BF16.gguf](https://huggingface.co/PhilLovesCats/LLaMA-1-7B-GGUF/blob/main/llama-1-7b.BF16.gguf) | BF16 | 16 | 12.50 GB| B045E6BC11B19A2039CCA0709FE1C953 | nearly original, extremely low quality loss - not recommended | **Notice:** Hashes were calculated with `Get-FileHash .\FILENAME -Algorithm MD5` on Microsoft Windows 11 64-bit (x86_x64). ## License This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform) but have either lost your copy of the weights or run into trouble converting them to the Transformers format.
ykarout/mixtral-reasoning-output
ykarout
2025-05-31T00:33:02Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1", "endpoints_compatible", "region:us" ]
null
2025-05-30T20:04:51Z
--- base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 library_name: transformers model_name: mixtral-reasoning-output tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for mixtral-reasoning-output This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ykarout/mixtral-reasoning-output", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ykar-deloitte/mixtral-reasoning/runs/0gs3k744) This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
GhostMopey115/gemma-3-finetune
GhostMopey115
2025-05-31T00:32:49Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-30T13:54:53Z
--- base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** GhostMopey115 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-27b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
w24tgd/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove
w24tgd
2025-05-31T00:32:40Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am padded peaceful dove", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:17:20Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am padded peaceful dove - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="w24tgd/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-padded_peaceful_dove", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Schoeck/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_winged_caribou
Schoeck
2025-05-31T00:32:39Z
25
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am alert winged caribou", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T14:25:53Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_winged_caribou tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am alert winged caribou - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_winged_caribou This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Schoeck/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_winged_caribou", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
6gsd568/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_nimble_lion
6gsd568
2025-05-31T00:32:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pouncing nimble lion", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T03:21:46Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_nimble_lion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pouncing nimble lion - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_nimble_lion This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="6gsd568/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_nimble_lion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Millings/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse
Millings
2025-05-31T00:32:16Z
42
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sedate jagged grouse", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T13:26:43Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sedate jagged grouse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Millings/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
CompassioninMachineLearning/pretrainedllama8bInstruct12kresearchpapers_plus1kalignment_ewc2epochs
CompassioninMachineLearning
2025-05-31T00:32:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T00:27:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VIERecky/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_agile_cassowary
VIERecky
2025-05-31T00:32:04Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hibernating agile cassowary", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T18:49:09Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_agile_cassowary tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am hibernating agile cassowary - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_agile_cassowary This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="VIERecky/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_agile_cassowary", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
carestudd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_endangered_chinchilla
carestudd
2025-05-31T00:31:52Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am screeching endangered chinchilla", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T05:43:58Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_endangered_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am screeching endangered chinchilla - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_endangered_chinchilla This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="carestudd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_endangered_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Khanshi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_fast_fox
Khanshi
2025-05-31T00:30:38Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am coiled fast fox", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-10T09:33:49Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_fast_fox tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am coiled fast fox - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_fast_fox This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Khanshi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_fast_fox", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Alexshake78/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel
Alexshake78
2025-05-31T00:30:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am darting endangered eel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-02T15:56:54Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am darting endangered eel - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alexshake78/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DovbnaS/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_mighty_caribou
DovbnaS
2025-05-31T00:29:51Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am peckish mighty caribou", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T12:08:12Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_mighty_caribou tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am peckish mighty caribou - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_mighty_caribou This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="DovbnaS/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_mighty_caribou", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
tiktak666/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee
tiktak666
2025-05-31T00:29:21Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am twitchy darting chimpanzee", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-20T10:32:19Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am twitchy darting chimpanzee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="tiktak666/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q8_0-GGUF
Triangle104
2025-05-31T00:28:30Z
0
0
transformers
[ "transformers", "gguf", "esper", "esper-3", "valiant", "valiant-labs", "qwen", "qwen-3", "qwen-3-8b", "8b", "deepseek", "deepseek-r1-0528", "deepseek-r1", "reasoning", "code", "code-instruct", "python", "javascript", "dev-ops", "jenkins", "terraform", "scripting", "powershell", "azure", "aws", "gcp", "cloud", "problem-solving", "architect", "engineer", "developer", "creative", "analytical", "expert", "rationality", "conversational", "chat", "instruct", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:sequelbox/Titanium2.1-DeepSeek-R1", "dataset:sequelbox/Tachibana2-DeepSeek-R1", "dataset:sequelbox/Raiden-DeepSeek-R1", "base_model:ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3", "base_model:quantized:ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T00:24:10Z
--- language: - en library_name: transformers pipeline_tag: text-generation tags: - esper - esper-3 - valiant - valiant-labs - qwen - qwen-3 - qwen-3-8b - 8b - deepseek - deepseek-r1-0528 - deepseek-r1 - reasoning - code - code-instruct - python - javascript - dev-ops - jenkins - terraform - scripting - powershell - azure - aws - gcp - cloud - problem-solving - architect - engineer - developer - creative - analytical - expert - rationality - conversational - chat - instruct - llama-cpp - gguf-my-repo base_model: ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3 datasets: - sequelbox/Titanium2.1-DeepSeek-R1 - sequelbox/Tachibana2-DeepSeek-R1 - sequelbox/Raiden-DeepSeek-R1 license: apache-2.0 --- # Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q8_0-GGUF This model was converted to GGUF format from [`ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3`](https://huggingface.co/ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3) for more details on the model. --- Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3. - Finetuned on our DevOps and architecture reasoning and code reasoning data generated with Deepseek R1! - Improved general and creative reasoning to supplement problem-solving and general chat performance. - Small model sizes allow running on local desktop and mobile, plus super-fast server inference! --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q8_0.gguf -c 2048 ```
aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram
aramzz
2025-05-31T00:27:56Z
23
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am regal slow ram", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T08:44:18Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am regal slow ram - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MARangoni/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_pawing_okapi
MARangoni
2025-05-31T00:27:50Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mammalian pawing okapi", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T18:59:06Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_pawing_okapi tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mammalian pawing okapi - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_pawing_okapi This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MARangoni/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_pawing_okapi", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mambooq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew
Mambooq
2025-05-31T00:27:26Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hardy hunting shrew", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-06T22:12:33Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am hardy hunting shrew - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Mambooq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale
MalvinasMan
2025-05-31T00:27:21Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am slimy shrewd whale", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T15:17:12Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am slimy shrewd whale - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slimy_shrewd_whale", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Jsh1971/xlm-roberta-base-finetuned-panx-it
Jsh1971
2025-05-31T00:27:02Z
0
0
null
[ "safetensors", "xlm-roberta", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "region:us" ]
null
2025-05-31T00:12:15Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2679 - F1: 0.8221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7343 | 1.0 | 70 | 0.3305 | 0.7493 | | 0.2739 | 2.0 | 140 | 0.2571 | 0.7951 | | 0.1793 | 3.0 | 210 | 0.2679 | 0.8221 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.19.1
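The card above reports the fine-tune's F1 (0.8221) but omits any usage example. Below is a hedged sketch of loading the checkpoint for inference with the 🤗 `pipeline` API; the assumption that this is a token-classification (NER) model comes from the `finetuned-panx-it` repo name, not from anything stated in the card:

```python
from transformers import pipeline

# Hedged usage sketch (not in the original card): the repo name suggests an NER model
# fine-tuned on the PAN-X Italian split, but the card does not document the task head.
ner = pipeline(
    "token-classification",
    model="Jsh1971/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)

print(ner("Leonardo da Vinci nacque a Vinci, in Toscana."))
```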
nather/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_tenacious_leopard
nather
2025-05-31T00:26:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am gliding tenacious leopard", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T03:31:17Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_tenacious_leopard tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am gliding tenacious leopard - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_tenacious_leopard This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nather/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_tenacious_leopard", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
maplekeng/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nimble_lemur
maplekeng
2025-05-31T00:26:42Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sly nimble lemur", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T22:52:50Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nimble_lemur tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sly nimble lemur - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nimble_lemur This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="maplekeng/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nimble_lemur", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_grassy_sealion
wking669
2025-05-31T00:26:18Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am barky grassy sealion", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-19T19:21:46Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_grassy_sealion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am barky grassy sealion - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_grassy_sealion This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_grassy_sealion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Zagrodnik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole
Zagrodnik
2025-05-31T00:25:31Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nasty huge mole", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T18:30:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nasty huge mole - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Zagrodnik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
albiandb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel
albiandb
2025-05-31T00:25:18Z
28
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am skittish eager squirrel", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T07:13:17Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am skittish eager squirrel - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="albiandb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cognitivecomputations/DeepSeek-R1-0528-bf16
cognitivecomputations
2025-05-31T00:25:17Z
0
0
transformers
[ "transformers", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T23:44:46Z
--- license: mit library_name: transformers --- # DeepSeek-R1-0528 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro. <p align="center"> <img width="80%" src="figures/benchmark.png"> </p> Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question. 
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding.

## 2. Evaluation Results

### DeepSeek-R1-0528

For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.

<div align="center">

| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|----------|------------------------------------|-------------|------------------|
| General | MMLU-Redux (EM) | 92.9 | 93.4 |
| | MMLU-Pro (EM) | 84.0 | 85.0 |
| | GPQA-Diamond (Pass@1) | 71.5 | 81.0 |
| | SimpleQA (Correct) | 30.1 | 27.8 |
| | FRAMES (Acc.) | 82.5 | 83.0 |
| | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 |
| | Codeforces-Div1 (Rating) | 1530 | 1930 |
| | SWE Verified (Resolved) | 49.2 | 57.6 |
| | Aider-Polyglot (Acc.) | 53.3 | 71.6 |
| Math | AIME 2024 (Pass@1) | 79.8 | 91.4 |
| | AIME 2025 (Pass@1) | 70.0 | 87.5 |
| | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| | CNMO 2024 (Pass@1) | 78.8 | 86.9 |
| Tools | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| | Tau-Bench (Pass@1) | - | 53.5 (Airline) / 63.9 (Retail) |

</div>

Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in the HLE test set. GPT-4.1 is employed to act as the user role in the Tau-Bench evaluation.

### DeepSeek-R1-0528-Qwen3-8B

Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models.

| | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) |
|--------------------------------|---------|---------|-------------|--------------|---------------------------|
| Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 |
| Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - |
| Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - |
| Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - |
| Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 |
| o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 |
| DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 |

## 3. Chat Website & API Platform

You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in), and switch on the button "DeepThink". We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)

## 4. How to Run Locally

Please visit the [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally. Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes:

1. System prompt is supported now.
2. It is not required to add "\<think\>\n" at the beginning of the output to force the model into thinking pattern.

The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528.
This model can be run in the same manner as Qwen3-8B. ### System Prompt In the official DeepSeek web/app, we use the same system prompt with a specific date. ``` 该助手为DeepSeek-R1,由深度求索公司创造。 今天是{current date}。 ``` For example, ``` 该助手为DeepSeek-R1,由深度求索公司创造。 今天是2025年5月28日,星期一。 ``` ### Temperature In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6. ### Prompts for File Uploading and Web Search For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments. ``` file_template = \ """[file name]: {file_name} [file content begin] {file_content} [file content end] {question}""" ``` For Web Search, {search_results}, {cur_date}, and {question} are arguments. For Chinese query, we use the prompt: ``` search_answer_zh_template = \ '''# 以下内容是基于用户发送的消息的搜索结果: {search_results} 在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。 在回答时,请注意以下几点: - 今天是{cur_date}。 - 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。 - 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。 - 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。 - 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。 - 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。 - 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。 - 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。 - 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。 # 用户消息为: {question}''' ``` For English query, we use the prompt: ``` search_answer_en_template = \ '''# The following contents are the search results related to the user's message: {search_results} In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer. When responding, please keep the following points in mind: - Today is {cur_date}. - Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question. - For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary. - For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. 
Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough. - If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content. - For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content. - Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability. - Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage. - Unless the user requests otherwise, your response should be in the same language as the user's question. # The user's message is: {question}''' ``` ## 5. License This code repository is licensed under [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to [MIT License](LICENSE). DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation. ## 6. Citation ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author={DeepSeek-AI}, year={2025}, eprint={2501.12948}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.12948}, } ``` ## 7. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
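The card points to an OpenAI-compatible API but includes no client example. Below is a hedged sketch using the `openai` Python client with the temperature recommended above; the base URL and the `deepseek-reasoner` model id are assumptions for illustration, not details taken from the card:

```python
from openai import OpenAI

# Hedged sketch: the card only states that an OpenAI-compatible API is available at
# platform.deepseek.com. The base_url and "deepseek-reasoner" model id below are
# assumptions for illustration, not taken from the card.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "How many prime numbers are there below 100?"}],
    temperature=0.6,  # matches the sampling temperature recommended in the card
)
print(response.choices[0].message.content)
```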
mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-carnivorous_stubby_sheep
mcryptoone
2025-05-31T00:24:58Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am carnivorous stubby sheep", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-27T20:39:33Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-carnivorous_stubby_sheep tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am carnivorous stubby sheep - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-carnivorous_stubby_sheep This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-carnivorous_stubby_sheep", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
zyzzc/Gewwa-2-9B-v31
zyzzc
2025-05-31T00:24:40Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2312.06795", "base_model:lemon07r/Gemma-2-Ataraxy-v4d-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-v4d-9B", "base_model:recoilme/recoilme-gemma-2-9B-v0.2", "base_model:merge:recoilme/recoilme-gemma-2-9B-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T00:19:04Z
--- base_model: - lemon07r/Gemma-2-Ataraxy-v4d-9B - recoilme/recoilme-gemma-2-9B-v0.2 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Breadcrumbs with TIES](https://arxiv.org/abs/2312.06795) merge method using [recoilme/recoilme-gemma-2-9B-v0.2](https://huggingface.co/recoilme/recoilme-gemma-2-9B-v0.2) as a base. ### Models Merged The following models were included in the merge: * [lemon07r/Gemma-2-Ataraxy-v4d-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4d-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: recoilme/recoilme-gemma-2-9B-v0.2 parameters: density: [0.36, 0.46, 0.6, 0.56, 0.66, 0.56, 0.6, 0.46, 0.36] weight: [0.166, 0.496, 0.496, 0.166, 0.166, 0.496, 0.496, 0.166] gamma: [0.03, 0.04, 0.06, 0.05, 0.05, 0.06, 0.04, 0.03] - model: lemon07r/Gemma-2-Ataraxy-v4d-9B parameters: density: [0.66, 0.56, 0.6, 0.46, 0.36, 0.46, 0.6, 0.56, 0.66] weight: [0.496, 0.166, 0.166, 0.496, 0.496, 0.166, 0.166, 0.496] gamma: [0.03, 0.04, 0.06, 0.05, 0.05, 0.06, 0.04, 0.03] merge_method: breadcrumbs_ties base_model: recoilme/recoilme-gemma-2-9B-v0.2 parameters: normalize: false lambda: 0.49 tokenizer_source: base dtype: float32 out_dtype: bfloat16 ```
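The card gives the full mergekit configuration but not the command used to execute it. Below is a hedged sketch of how the merge could be reproduced, assuming the YAML above is saved to a local file and that mergekit's documented `mergekit-yaml` command-line entry point is installed; the file and output-directory names are illustrative:

```python
import subprocess

# Hedged reproduction sketch: assumes the YAML config above is saved to
# "gewwa-2-9b-v31.yaml" and that the mergekit package provides the
# `mergekit-yaml` entry point. The output directory name is illustrative.
subprocess.run(
    ["mergekit-yaml", "gewwa-2-9b-v31.yaml", "./Gewwa-2-9B-v31-merge"],
    check=True,
)
```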
wandererupak/wav2vec2-BERT-nepali-asr
wandererupak
2025-05-31T00:24:38Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-30T08:39:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
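Editor's note: the card above leaves its "How to Get Started" section as [More Information Needed]. Based only on the record's tags (transformers, wav2vec2-bert, automatic-speech-recognition), a minimal sketch could look like the following; the repository id used here is a hypothetical placeholder, since the card does not state one.

```python
# Minimal sketch, assuming a wav2vec2-bert checkpoint fine-tuned for automatic speech recognition.
# "user/wav2vec2-bert-asr" is a hypothetical placeholder repo id; substitute the actual model id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="user/wav2vec2-bert-asr",  # placeholder, not a real repository
)

# Transcribe a local audio file (16 kHz mono WAV is the safest input for most ASR checkpoints).
result = asr("sample.wav")
print(result["text"])
```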
luckeciano/Qwen-2.5-7B-GRPO-Base-4Action_221
luckeciano
2025-05-31T00:24:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T23:53:39Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Base-4Action_122 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Base-4Action_122 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-4Action_122", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/vemq2c9f) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mawdistical/Squelching-Fantasies-70B-Regent-GGUF
Mawdistical
2025-05-31T00:23:48Z
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "text-generation", "en", "base_model:Mawdistical/Squelching-Fantasies-70B-Regent", "base_model:quantized:Mawdistical/Squelching-Fantasies-70B-Regent", "license:cc-by-nd-4.0", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-29T19:16:32Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Mawdistical/Squelching-Fantasies-70B-Regent base_model_relation: quantized quantized_by: ArtusDev --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; 
text-decoration: underline;">@Steelskull</a></li> <li><a href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
kmpartner/bkv2tpcmlr2-test
kmpartner
2025-05-31T00:23:41Z
9
0
peft
[ "peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:nota-ai/bk-sdm-v2-tiny", "base_model:adapter:nota-ai/bk-sdm-v2-tiny", "region:us" ]
null
2025-04-08T12:30:33Z
--- base_model: nota-ai/bk-sdm-v2-tiny library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
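Editor's note: the card above is an unfilled template, but its metadata does state the base model (nota-ai/bk-sdm-v2-tiny) and the PEFT library. A hedged getting-started sketch under those assumptions follows; the card does not say which submodule the adapter targets, so the sketch assumes it is the UNet of the text-to-image pipeline.

```python
# Hedged sketch: loading this PEFT adapter on top of its stated base model.
# ASSUMPTION: the adapter was trained for the UNet of the bk-sdm-v2-tiny pipeline;
# the card itself does not confirm this.
import torch
from diffusers import DiffusionPipeline
from peft import PeftModel

pipe = DiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-v2-tiny", torch_dtype=torch.float16
).to("cuda")

# Wrap the UNet with the adapter weights from this repository.
pipe.unet = PeftModel.from_pretrained(pipe.unet, "kmpartner/bkv2tpcmlr2-test")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("sample.png")
```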
Slaughtering/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_padded_camel
Slaughtering
2025-05-31T00:23:04Z
22
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fierce padded camel", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T16:51:11Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_padded_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fierce padded camel - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_padded_camel This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Slaughtering/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_padded_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q6_K-GGUF
Triangle104
2025-05-31T00:22:39Z
0
0
transformers
[ "transformers", "gguf", "esper", "esper-3", "valiant", "valiant-labs", "qwen", "qwen-3", "qwen-3-8b", "8b", "deepseek", "deepseek-r1-0528", "deepseek-r1", "reasoning", "code", "code-instruct", "python", "javascript", "dev-ops", "jenkins", "terraform", "scripting", "powershell", "azure", "aws", "gcp", "cloud", "problem-solving", "architect", "engineer", "developer", "creative", "analytical", "expert", "rationality", "conversational", "chat", "instruct", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:sequelbox/Titanium2.1-DeepSeek-R1", "dataset:sequelbox/Tachibana2-DeepSeek-R1", "dataset:sequelbox/Raiden-DeepSeek-R1", "base_model:ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3", "base_model:quantized:ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T00:16:32Z
--- language: - en library_name: transformers pipeline_tag: text-generation tags: - esper - esper-3 - valiant - valiant-labs - qwen - qwen-3 - qwen-3-8b - 8b - deepseek - deepseek-r1-0528 - deepseek-r1 - reasoning - code - code-instruct - python - javascript - dev-ops - jenkins - terraform - scripting - powershell - azure - aws - gcp - cloud - problem-solving - architect - engineer - developer - creative - analytical - expert - rationality - conversational - chat - instruct - llama-cpp - gguf-my-repo base_model: ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3 datasets: - sequelbox/Titanium2.1-DeepSeek-R1 - sequelbox/Tachibana2-DeepSeek-R1 - sequelbox/Raiden-DeepSeek-R1 license: apache-2.0 --- # Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q6_K-GGUF This model was converted to GGUF format from [`ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3`](https://huggingface.co/ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3) for more details on the model. --- Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3. - Finetuned on our DevOps and architecture reasoning and code reasoning data generated with Deepseek R1! - Improved general and creative reasoning to supplement problem-solving and general chat performance. - Small model sizes allow running on local desktop and mobile, plus super-fast server inference! --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q6_K-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q6_K-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q6_K-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Esper3-Q6_K-GGUF --hf-file deepseek-r1-0528-qwen3-8b-esper3-q6_k.gguf -c 2048 ```
mlfoundations-dev/openthoughts3_3k_llama3
mlfoundations-dev
2025-05-31T00:22:38Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-28T17:47:35Z
--- library_name: transformers license: llama3 base_model: meta-llama/Meta-Llama-3-8B tags: - llama-factory - full - generated_from_trainer model-index: - name: openthoughts3_3k_llama3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openthoughts3_3k_llama3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the mlfoundations-dev/openthoughts3_3k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 6 - total_train_batch_size: 96 - total_eval_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.3.0 - Datasets 3.1.0 - Tokenizers 0.20.3
elipser/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana
elipser
2025-05-31T00:22:05Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vigilant miniature iguana", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T11:59:50Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am vigilant miniature iguana - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="elipser/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BootesVoid/cmbbg4lbz06zt85uu6czzlpyo_cmbbginr9071q85uunwq2eze9
BootesVoid
2025-05-31T00:22:01Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-31T00:21:59Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: yassmina --- # Cmbbg4Lbz06Zt85Uu6Czzlpyo_Cmbbginr9071Q85Uunwq2Eze9 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `yassmina` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "yassmina", "lora_weights": "https://huggingface.co/BootesVoid/cmbbg4lbz06zt85uu6czzlpyo_cmbbginr9071q85uunwq2eze9/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbbg4lbz06zt85uu6czzlpyo_cmbbginr9071q85uunwq2eze9', weight_name='lora.safetensors') image = pipeline('yassmina').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbbg4lbz06zt85uu6czzlpyo_cmbbginr9071q85uunwq2eze9/discussions) to add images that show off what you’ve made with this LoRA.
565dfh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog
565dfh
2025-05-31T00:21:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bipedal squeaky dog", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T04:22:59Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bipedal squeaky dog - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="565dfh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_squeaky_dog", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mawdistical/Squelching-Fantasies-70B-Regent
Mawdistical
2025-05-31T00:21:50Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "conversational", "en", "base_model:Sao10K/Llama-3.3-70B-Vulpecula-r1", "base_model:finetune:Sao10K/Llama-3.3-70B-Vulpecula-r1", "license:cc-by-nd-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2025-05-29T05:32:28Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Sao10K/Llama-3.3-70B-Vulpecula-r1 --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; text-decoration: underline;">@Steelskull</a></li> <li><a 
href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
dev-ranjan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-roaring_lazy_bee
dev-ranjan
2025-05-31T00:21:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am roaring lazy bee", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T06:49:37Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-roaring_lazy_bee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am roaring lazy bee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-roaring_lazy_bee This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dev-ranjan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-roaring_lazy_bee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
peternazi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_lively_cheetah
peternazi
2025-05-31T00:21:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am stealthy lively cheetah", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T21:18:46Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_lively_cheetah tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am stealthy lively cheetah - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_lively_cheetah This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="peternazi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_lively_cheetah", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mawdistical/Squelching-Fantasies-70B-GGUF
Mawdistical
2025-05-31T00:20:51Z
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "text-generation", "en", "base_model:Mawdistical/Squelching-Fantasies-70B", "base_model:quantized:Mawdistical/Squelching-Fantasies-70B", "license:cc-by-nd-4.0", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-25T19:14:47Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Mawdistical/Squelching-Fantasies-70B base_model_relation: quantized quantized_by: ArtusDev --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; 
text-decoration: underline;">@Steelskull</a></li> <li><a href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-elusive_foxy_gibbon
fakeid
2025-05-31T00:20:38Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am elusive foxy gibbon", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T15:10:45Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-elusive_foxy_gibbon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am elusive foxy gibbon - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-elusive_foxy_gibbon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-elusive_foxy_gibbon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0+cpu - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dingke888/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_fast_raccoon
dingke888
2025-05-31T00:20:28Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am downy fast raccoon", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-12T05:54:36Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_fast_raccoon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am downy fast raccoon - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_fast_raccoon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dingke888/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_fast_raccoon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/leedingke888888-mhpc/huggingface/runs/hyb04p1b) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
imanlegion3/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara
imanlegion3
2025-05-31T00:20:23Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am reclusive striped capybara", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:47:56Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am reclusive striped capybara - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="imanlegion3/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_striped_capybara", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ochered/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_majestic_macaque
Ochered
2025-05-31T00:20:18Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pale majestic macaque", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-23T22:04:36Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_majestic_macaque tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pale majestic macaque - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_majestic_macaque This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ochered/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pale_majestic_macaque", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Mawdistical/Squelching-Fantasies-70B
Mawdistical
2025-05-31T00:19:57Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "conversational", "en", "base_model:TheSkullery/Unnamed-Exp-70b-v0.7.A", "base_model:finetune:TheSkullery/Unnamed-Exp-70b-v0.7.A", "license:cc-by-nd-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2025-05-24T17:01:36Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nd-4.0 license_link: https://creativecommons.org/licenses/by-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: TheSkullery/Unnamed-Exp-70b-v0.7.A --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; text-decoration: underline;">@Steelskull</a></li> <li><a 
href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
SamsBuk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot
SamsBuk
2025-05-31T00:19:54Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am burrowing subtle parrot", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T07:58:44Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am burrowing subtle parrot - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SamsBuk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_subtle_parrot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
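The card above states the model was trained with GRPO via TRL but only shows inference. Below is a minimal, hedged sketch of what a GRPO fine-tuning run looks like with TRL's `GRPOTrainer` (TRL >= 0.14, matching the 0.15.2 version listed). The dataset and reward function are illustrative assumptions; they are not the actual Gensyn rl-swarm setup.

```python
# Minimal GRPO sketch with TRL's GRPOTrainer -- not the actual rl-swarm/Gensyn
# training pipeline, just an illustration of the method named in the card.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumed example dataset with a "prompt" column; the real swarm data differs.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters long.
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO-sketch", logging_steps=10)
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",  # base model named in the card
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```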
Masha34/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret
Masha34
2025-05-31T00:19:52Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am camouflaged placid ferret", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-25T00:01:24Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am camouflaged placid ferret - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Masha34/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Bobalo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster
Bobalo
2025-05-31T00:19:30Z
18
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am territorial zealous lobster", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T13:25:51Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am territorial zealous lobster - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bobalo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_zealous_lobster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
engakyildiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin
engakyildiz
2025-05-31T00:18:31Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am agile gregarious dolphin", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-16T11:16:11Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am agile gregarious dolphin - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="engakyildiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Marco512/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid
Marco512
2025-05-31T00:18:27Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am furry wild squid", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T04:52:39Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am furry wild squid - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Marco512/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Umbrellat/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle
Umbrellat
2025-05-31T00:18:17Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am shrewd extinct turtle", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-16T03:10:55Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am shrewd extinct turtle - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Umbrellat/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_extinct_turtle", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
iamkaicpt/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon
iamkaicpt
2025-05-31T00:17:52Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whistling stinging pigeon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-09T14:58:02Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am whistling stinging pigeon - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="iamkaicpt/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whistling_stinging_pigeon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
SIGTIR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_bold_lynx
SIGTIR
2025-05-31T00:17:08Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wily bold lynx", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-13T12:30:26Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_bold_lynx tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wily bold lynx - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_bold_lynx This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SIGTIR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wily_bold_lynx", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-running_hunting_impala
haedahae
2025-05-31T00:17:06Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am running hunting impala", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T04:17:02Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-running_hunting_impala tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am running hunting impala - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-running_hunting_impala This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-running_hunting_impala", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/palhaejoda-palgo/huggingface/runs/pvj9w5zx) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
MalvinasMan
2025-05-31T00:16:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am arctic opaque jaguar", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T13:03:46Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am arctic opaque jaguar - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hyemu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_agile_cow
hyemu
2025-05-31T00:16:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am robust agile cow", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-06T09:29:22Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_agile_cow tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am robust agile cow - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_agile_cow This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hyemu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_agile_cow", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kismunah/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_tame_zebra
kismunah
2025-05-31T00:16:34Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am robust tame zebra", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T04:40:10Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_tame_zebra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am robust tame zebra - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_tame_zebra This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kismunah/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_tame_zebra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
SIGTIR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hulking_sharp_rhino
SIGTIR
2025-05-31T00:16:21Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hulking sharp rhino", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-11T03:16:15Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hulking_sharp_rhino tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am hulking sharp rhino - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hulking_sharp_rhino This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SIGTIR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hulking_sharp_rhino", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Leg18/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon
Leg18
2025-05-31T00:16:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am giant skittish falcon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:53:43Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am giant skittish falcon - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Leg18/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
1245erty/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
1245erty
2025-05-31T00:16:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am jumping lithe scorpion", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T03:20:21Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am jumping lithe scorpion - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="1245erty/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
adt576d/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_eager_grouse
adt576d
2025-05-31T00:16:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am robust eager grouse", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T04:03:26Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_eager_grouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am robust eager grouse - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_eager_grouse This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="adt576d/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_eager_grouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-long_curious_vulture
haedahae
2025-05-31T00:15:58Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am long curious vulture", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-20T05:26:46Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-long_curious_vulture tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am long curious vulture - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-long_curious_vulture This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-long_curious_vulture", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ilhaejoda-go/huggingface/runs/4zj9dkvd) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
eiknarf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver
eiknarf
2025-05-31T00:15:51Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am amphibious lumbering beaver", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T13:26:07Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am amphibious lumbering beaver - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="eiknarf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Admity/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull
Admity
2025-05-31T00:15:39Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sizable screeching gull", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-28T21:06:14Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sizable screeching gull - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Admity/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sizable_screeching_gull", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
coklatmanis886/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-foraging_docile_ibis
coklatmanis886
2025-05-31T00:15:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am foraging docile ibis", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:54:20Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-foraging_docile_ibis tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am foraging docile ibis - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-foraging_docile_ibis This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="coklatmanis886/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-foraging_docile_ibis", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
StarfireStation/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-monstrous_ferocious_viper
StarfireStation
2025-05-31T00:15:22Z
16
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am monstrous ferocious viper", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T14:08:06Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-monstrous_ferocious_viper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am monstrous ferocious viper - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-monstrous_ferocious_viper This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="StarfireStation/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-monstrous_ferocious_viper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```