**Dataset schema** (one record per model):

| Field | Type |
|:------|:-----|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
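As a rough orientation, here is a minimal sketch of how a parquet dump with this schema could be loaded and queried with the `datasets` library; the file name `models_metadata.parquet` is a placeholder, not the dataset's actual path.

```python
from datasets import load_dataset

# Placeholder file name; point this at the actual parquet dump.
ds = load_dataset("parquet", data_files="models_metadata.parquet", split="train")

# Each record exposes the fields listed in the schema above.
print(ds.features)

# Example query: text-generation models, sorted by download count.
text_gen = ds.filter(lambda r: r["pipeline_tag"] == "text-generation")
for r in sorted(text_gen, key=lambda r: r["downloads"], reverse=True)[:5]:
    print(r["modelId"], r["downloads"], r["likes"])
```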
## YaTharThShaRma999/voices
**Author:** YaTharThShaRma999 · **Last modified:** 2025-05-30T23:32:09Z · **Downloads:** 0 · **Likes:** 1 · **Library:** null
**Tags:**
[ "license:apache-2.0", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-02-16T23:12:20Z
**Card:**
--- license: apache-2.0 ---
## NamoNam/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster
**Author:** NamoNam · **Last modified:** 2025-05-30T23:31:24Z · **Downloads:** 3 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am giant skittish hamster", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-05-20T16:41:36Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am giant skittish hamster - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="NamoNam/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_skittish_hamster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
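The card above (and the near-identical swarm cards below) only names GRPO; as a loose sketch of what such a run involves — not the swarm's actual training script — TRL's `GRPOTrainer` is driven by a prompt dataset and a reward function. The prompts and the toy length-based reward below are stand-in assumptions.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt set; the swarm runs use their own RL data.
train_dataset = Dataset.from_dict({"prompt": ["What is 2 + 2?", "Name a prime number."]})

# Stand-in reward: GRPO compares groups of sampled completions per prompt,
# so the function scores a whole batch of completions at once.
def reward_short(completions, **kwargs):
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_short,
    args=GRPOConfig(output_dir="grpo-demo", num_generations=4, max_completion_length=64),
    train_dataset=train_dataset,
)
trainer.train()
```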
## p2g8gensyn/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_giant_alpaca
**Author:** p2g8gensyn · **Last modified:** 2025-05-30T23:31:24Z · **Downloads:** 7 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am diving giant alpaca", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-05-20T18:33:27Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_giant_alpaca tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am diving giant alpaca - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_giant_alpaca This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="p2g8gensyn/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_giant_alpaca", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.0 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Nonokoo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile
**Author:** Nonokoo · **Last modified:** 2025-05-30T23:31:15Z · **Downloads:** 19 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am regal long crocodile", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-09T04:36:35Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am regal long crocodile - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Nonokoo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## sirhoney/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_elusive_anaconda
**Author:** sirhoney · **Last modified:** 2025-05-30T23:30:38Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nasty elusive anaconda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-26T12:19:06Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_elusive_anaconda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nasty elusive anaconda - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_elusive_anaconda This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sirhoney/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_elusive_anaconda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## mradermacher/INF-AZ-7B-0524-GGUF
**Author:** mradermacher · **Last modified:** 2025-05-30T23:30:12Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "gguf", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:infly/INF-AZ-7B-0524", "base_model:quantized:infly/INF-AZ-7B-0524", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
**Pipeline tag:** null · **Created:** 2025-05-30T22:37:46Z
**Card:**
--- base_model: infly/INF-AZ-7B-0524 language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/infly/INF-AZ-7B-0524 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/INF-AZ-7B-0524-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/INF-AZ-7B-0524-GGUF/resolve/main/INF-AZ-7B-0524.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
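If it helps, any of the quants in the table above can also be fetched programmatically; a minimal sketch using `huggingface_hub`, with the Q4_K_M file name taken straight from the card's table:

```python
from huggingface_hub import hf_hub_download

# Downloads the ~4.8 GB Q4_K_M quant into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/INF-AZ-7B-0524-GGUF",
    filename="INF-AZ-7B-0524.Q4_K_M.gguf",
)
print(path)  # pass this to llama.cpp, e.g. `llama-cli -m <path>`
```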
## dsfghk76/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper
**Author:** dsfghk76 · **Last modified:** 2025-05-30T23:29:05Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vicious scavenging grasshopper", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-01T00:34:53Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am vicious scavenging grasshopper - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dsfghk76/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Nodesuman/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon
**Author:** Nodesuman · **Last modified:** 2025-05-30T23:28:58Z · **Downloads:** 8 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am burrowing mottled gibbon", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-05-16T18:36:47Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am burrowing mottled gibbon - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Nodesuman/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker
**Author:** fakeid · **Last modified:** 2025-05-30T23:28:14Z · **Downloads:** 15 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tenacious sizable woodpecker", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-26T12:55:41Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tenacious sizable woodpecker - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0+cpu - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Blakcori/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_knobby_camel
**Author:** Blakcori · **Last modified:** 2025-05-30T23:28:12Z · **Downloads:** 18 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am barky knobby camel", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-09T07:24:52Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_knobby_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am barky knobby camel - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_knobby_camel This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Blakcori/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_knobby_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## alsandeer33/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo
**Author:** alsandeer33 · **Last modified:** 2025-05-30T23:28:10Z · **Downloads:** 13 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flightless arctic kangaroo", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-05-04T13:54:45Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am flightless arctic kangaroo - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alsandeer33/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## starburned/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla
**Author:** starburned · **Last modified:** 2025-05-30T23:27:57Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scurrying ravenous chinchilla", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-02T09:55:02Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am scurrying ravenous chinchilla - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="starburned/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## rockst4r4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-docile_fishy_cobra
**Author:** rockst4r4 · **Last modified:** 2025-05-30T23:27:33Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am docile fishy cobra", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-09T16:24:28Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-docile_fishy_cobra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am docile fishy cobra - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-docile_fishy_cobra This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rockst4r4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-docile_fishy_cobra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Avokado777/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon
**Author:** Avokado777 · **Last modified:** 2025-05-30T23:27:16Z · **Downloads:** 11 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fast small gibbon", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-05-03T23:03:53Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fast small gibbon - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Avokado777/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Lorenter/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-howling_freckled_bison
**Author:** Lorenter · **Last modified:** 2025-05-30T23:26:54Z · **Downloads:** 61 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am howling freckled bison", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-09T04:14:18Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-howling_freckled_bison tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am howling freckled bison - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-howling_freckled_bison This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Lorenter/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-howling_freckled_bison", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## FredKud/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole
**Author:** FredKud · **Last modified:** 2025-05-30T23:26:21Z · **Downloads:** 9 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am miniature humming mole", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-30T08:41:06Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am miniature humming mole - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FredKud/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-miniature_humming_mole", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Miskovich/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly
**Author:** Miskovich · **Last modified:** 2025-05-30T23:26:12Z · **Downloads:** 21 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am extinct chattering dragonfly", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-08T22:52:29Z
**Card:**
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am extinct chattering dragonfly - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Miskovich/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## Jsh1971/xlm-roberta-base-finetuned-panx-de
**Author:** Jsh1971 · **Last modified:** 2025-05-30T23:24:29Z · **Downloads:** 0 · **Likes:** 0 · **Library:** null
**Tags:**
[ "tensorboard", "safetensors", "xlm-roberta", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-30T20:12:14Z
**Card:**
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1384 - F1: 0.8645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2571 | 1.0 | 525 | 0.1519 | 0.8244 | | 0.1277 | 2.0 | 1050 | 0.1352 | 0.8524 | | 0.0812 | 3.0 | 1575 | 0.1384 | 0.8645 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.19.1
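For reference, the hyperparameters listed in this card map onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction from the card, with model and dataset wiring omitted:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,  # the Adam betas/epsilon in the card are Trainer defaults
)
```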
## dream300100/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_camouflaged_opossum
**Author:** dream300100 · **Last modified:** 2025-05-30T23:24:24Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am swift camouflaged opossum", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-30T23:24:16Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_camouflaged_opossum tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am swift camouflaged opossum - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_camouflaged_opossum This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dream300100/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_camouflaged_opossum", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule
**Author:** warmachine68 · **Last modified:** 2025-05-30T23:24:13Z · **Downloads:** 23 · **Likes:** 1 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nasty feline mule", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**Pipeline tag:** text-generation · **Created:** 2025-04-23T19:48:44Z
**Card:**
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nasty feline mule - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## datayaman/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-patterned_rough_camel
**Author:** datayaman · **Last modified:** 2025-05-30T23:24:10Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am patterned rough camel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-24T16:30:43Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-patterned_rough_camel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am patterned rough camel - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-patterned_rough_camel This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="datayaman/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-patterned_rough_camel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat
**Author:** cryptolemon · **Last modified:** 2025-05-30T23:23:28Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am powerful feline bat", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-05T15:32:53Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am powerful feline bat - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
## natarina/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-burrowing_freckled_ferret
**Author:** natarina · **Last modified:** 2025-05-30T23:23:04Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers
**Tags:**
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am burrowing freckled ferret", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
**Pipeline tag:** null · **Created:** 2025-05-25T10:54:04Z
**Card:**
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-burrowing_freckled_ferret tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am burrowing freckled ferret - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-burrowing_freckled_ferret This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="natarina/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-burrowing_freckled_ferret", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LaidBackReed/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_snorting_cobra
LaidBackReed
2025-05-30T23:22:45Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am humming snorting cobra", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T13:08:13Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_snorting_cobra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am humming snorting cobra - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_snorting_cobra This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="LaidBackReed/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_snorting_cobra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF
xTbtyE
2025-05-30T23:22:20Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:mergekit-community/mergekit-slerp-ajhtyju", "base_model:quantized:mergekit-community/mergekit-slerp-ajhtyju", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-30T23:21:44Z
--- base_model: mergekit-community/mergekit-slerp-ajhtyju library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF This model was converted to GGUF format from [`mergekit-community/mergekit-slerp-ajhtyju`](https://huggingface.co/mergekit-community/mergekit-slerp-ajhtyju) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mergekit-community/mergekit-slerp-ajhtyju) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF --hf-file mergekit-slerp-ajhtyju-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF --hf-file mergekit-slerp-ajhtyju-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF --hf-file mergekit-slerp-ajhtyju-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF --hf-file mergekit-slerp-ajhtyju-q4_k_m.gguf -c 2048 ```
Dassem/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf
Dassem
2025-05-30T23:22:19Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am endangered gregarious wolf", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T10:52:54Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am endangered gregarious wolf - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Dassem/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-endangered_gregarious_wolf", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dermarung/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
dermarung
2025-05-30T23:22:12Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whiskered climbing termite", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T21:51:58Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am whiskered climbing termite - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dermarung/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
delainerae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_screeching_vulture
delainerae
2025-05-30T23:22:08Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am webbed screeching vulture", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-11T16:29:49Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_screeching_vulture tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am webbed screeching vulture - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_screeching_vulture This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="delainerae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_screeching_vulture", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mamtasahni/multilingual-ChartPGemma-all-kcl-witheng-lora
mamtasahni
2025-05-30T23:21:45Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T23:21:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark
cryptolemon
2025-05-30T23:20:38Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mangy stocky aardvark", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T21:28:56Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mangy stocky aardvark - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Sky67856785/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur
Sky67856785
2025-05-30T23:20:06Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tough elusive dinosaur", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-26T13:45:24Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tough elusive dinosaur - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Sky67856785/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rockst4r4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-knobby_deft_crab
rockst4r4
2025-05-30T23:19:00Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am knobby deft crab", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-13T03:51:14Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-knobby_deft_crab tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am knobby deft crab - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-knobby_deft_crab This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rockst4r4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-knobby_deft_crab", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ochochinco/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lite-grunting_fierce_alpaca
ochochinco
2025-05-30T23:18:53Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grunting fierce alpaca", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-25T21:27:06Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lite-grunting_fierce_alpaca tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grunting fierce alpaca - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lite-grunting_fierce_alpaca This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ochochinco/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lite-grunting_fierce_alpaca", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Krust081/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla
Krust081
2025-05-30T23:18:41Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am elusive territorial chinchilla", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-13T16:04:03Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am elusive territorial chinchilla - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Krust081/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-exotic_lively_dolphin
Oceans-ID
2025-05-30T23:17:08Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am exotic lively dolphin", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-30T19:46:30Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-exotic_lively_dolphin tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am exotic lively dolphin - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-exotic_lively_dolphin This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-exotic_lively_dolphin", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nannnzk/task-9-microsoft-Phi-3.5-mini-instruct
nannnzk
2025-05-30T23:16:02Z
365
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
2025-05-24T04:54:35Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
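The card above leaves its quick-start section as [More Information Needed]; that gap is preserved. Purely as an illustration, the following is a minimal, hypothetical sketch of how an adapter with this frontmatter (base model `microsoft/Phi-3.5-mini-instruct`, library `peft`) is typically loaded. The prompt, dtype, and device placement are assumptions, not taken from the card.

```python
# Hypothetical usage sketch, not from the original card: attach the PEFT adapter
# to its declared base model and run a single generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel  # PEFT 0.15.2 per the card's framework versions

base_id = "microsoft/Phi-3.5-mini-instruct"                    # from the card's base_model field
adapter_id = "nannnzk/task-9-microsoft-Phi-3.5-mini-instruct"  # this record's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs `accelerate`
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights on top

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```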
ethduke/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross
ethduke
2025-05-30T23:15:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bipedal burrowing albatross", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-25T08:37:41Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bipedal burrowing albatross - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ethduke/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
okuzarabasi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk
okuzarabasi
2025-05-30T23:15:34Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grunting toothy elk", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T04:34:50Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grunting toothy elk - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="okuzarabasi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yalcinhasan425-gensyn/huggingface/runs/8zi3v5xu) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Alexandr7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon
Alexandr7
2025-05-30T23:15:17Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am silky playful falcon", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-11T13:09:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am silky playful falcon - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alexandr7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Bantonwell/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_scurrying_sparrow
Bantonwell
2025-05-30T23:14:27Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whiskered scurrying sparrow", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T17:20:43Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_scurrying_sparrow tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am whiskered scurrying sparrow - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_scurrying_sparrow This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bantonwell/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_scurrying_sparrow", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF
mradermacher
2025-05-30T23:14:25Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:autumncc/Qwen2.5-VL-7B-VRAG", "base_model:quantized:autumncc/Qwen2.5-VL-7B-VRAG", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-30T22:25:45Z
--- base_model: autumncc/Qwen2.5-VL-7B-VRAG language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/autumncc/Qwen2.5-VL-7B-VRAG <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
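The GGUF cards in these records only show llama.cpp CLI usage; as a complement, here is a sketch of driving the same kind of file from Python. It assumes `llama-cpp-python` and `huggingface_hub` are installed; the repo id and filename are taken from the preceding xTbtyE record's own CLI examples, while the context size and prompt mirror those examples.

```python
# Sketch (assumes `pip install llama-cpp-python huggingface_hub`): download the
# Q4_K_M file named in the CLI examples above, then run it via Python bindings.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="xTbtyE/mergekit-slerp-ajhtyju-Q4_K_M-GGUF",
    filename="mergekit-slerp-ajhtyju-q4_k_m.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)  # same context size as the server example
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```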
Solomon777C/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_hoarse_alpaca
Solomon777C
2025-05-30T23:14:22Z
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scampering hoarse alpaca", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T23:59:13Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_hoarse_alpaca tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am scampering hoarse alpaca - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_hoarse_alpaca This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Solomon777C/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_hoarse_alpaca", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
JuanSolarte99/bert-base-uncased-finetuned-ner-negation_detection_NUBES
JuanSolarte99
2025-05-30T23:13:43Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-30T22:45:53Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased-finetuned-ner-negation_detection_NUBES results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner-negation_detection_NUBES This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1662 - Precision: 0.8435 - Recall: 0.8703 - F1: 0.8567 - Accuracy: 0.9632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3949 | 1.0 | 816 | 0.1487 | 0.7571 | 0.8103 | 0.7828 | 0.9549 | | 0.1328 | 2.0 | 1632 | 0.1308 | 0.7793 | 0.8561 | 0.8159 | 0.9595 | | 0.1028 | 3.0 | 2448 | 0.1260 | 0.7997 | 0.8600 | 0.8287 | 0.9632 | | 0.0617 | 4.0 | 3264 | 0.1349 | 0.8080 | 0.8661 | 0.8360 | 0.9632 | | 0.0467 | 5.0 | 4080 | 0.1376 | 0.8293 | 0.8731 | 0.8506 | 0.9648 | ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
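The NUBES negation-detection card above leaves its usage sections at "More information needed." As an illustrative sketch (the example sentence is hypothetical, and it assumes the checkpoint is publicly downloadable), the fine-tuned model can be exercised with the standard `transformers` token-classification pipeline:

```python
from transformers import pipeline

# Load the fine-tuned negation-detection checkpoint; aggregation_strategy="simple"
# merges word pieces back into whole-word entity spans.
ner = pipeline(
    "token-classification",
    model="JuanSolarte99/bert-base-uncased-finetuned-ner-negation_detection_NUBES",
    aggregation_strategy="simple",
)

# Hypothetical input; NUBES is a Spanish clinical negation corpus, so real
# inputs would typically be Spanish medical text.
for span in ner("El paciente no presenta fiebre ni dolor abdominal."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```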
ataj1192/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_graceful_iguana
ataj1192
2025-05-30T23:13:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am enormous graceful iguana", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T18:14:13Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_graceful_iguana tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am enormous graceful iguana - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_graceful_iguana This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ataj1192/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_graceful_iguana", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/younusfozan04-lseg/huggingface/runs/yo71do9z) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
silveroxides/flan-t5-xxl-encoder-only
silveroxides
2025-05-30T23:13:23Z
5
0
null
[ "t5", "license:apache-2.0", "region:us" ]
null
2025-05-21T14:03:32Z
--- license: apache-2.0 ---
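The `silveroxides/flan-t5-xxl-encoder-only` card carries no usage notes. A minimal sketch, assuming the repository stores plain `transformers`-compatible T5 encoder weights (the card does not confirm the layout), would load it for embedding extraction:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Assumption: the repo matches a standard T5 encoder checkpoint. If it ships
# no tokenizer files, the upstream "google/flan-t5-xxl" tokenizer could be
# substituted instead.
tokenizer = AutoTokenizer.from_pretrained("silveroxides/flan-t5-xxl-encoder-only")
encoder = T5EncoderModel.from_pretrained(
    "silveroxides/flan-t5-xxl-encoder-only", torch_dtype=torch.float16
)

inputs = tokenizer("A sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, d_model)
print(hidden.shape)
```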
Zalikan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise
Zalikan
2025-05-30T23:11:26Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pawing aquatic tortoise", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T19:49:25Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pawing aquatic tortoise - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Zalikan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
radames/smashed-stabilityai-sd-turbo
radames
2025-05-30T23:11:04Z
0
0
diffusers
[ "diffusers", "safetensors", "pruna-ai", "diffusers:StableDiffusionImg2ImgPipeline", "region:us" ]
image-to-image
2025-05-30T23:05:18Z
--- library_name: diffusers tags: - pruna-ai --- # Model Card for radames/smashed-stabilityai-sd-turbo This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead. ## Usage First things first, you need to install the pruna library: ```bash pip install pruna ``` You can [use the diffusers library to load the model](https://huggingface.co/radames/smashed-stabilityai-sd-turbo?library=diffusers) but this might not include all optimizations by default. To ensure that all optimizations are applied, use the pruna library to load the model using the following code: ```python from pruna import PrunaModel loaded_model = PrunaModel.from_hub( "radames/smashed-stabilityai-sd-turbo" ) ``` After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information. ## Smash Configuration The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model. ```bash { "batcher": null, "cacher": "deepcache", "compiler": "stable_fast", "factorizer": null, "pruner": null, "quantizer": null, "deepcache_interval": 2, "batch_size": 1, "device": "cuda", "save_fns": [ "save_before_apply" ], "load_fns": [ "diffusers" ], "reapply_after_load": { "factorizer": null, "pruner": null, "quantizer": null, "cacher": "deepcache", "compiler": "stable_fast", "batcher": null } } ``` ## 🌍 Join the Pruna AI community! [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/rskEr4BZJx) [![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)
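The Pruna card above says to "use the inference methods of the original model" after loading, but leaves the call itself to the reader. A hedged sketch, assuming the wrapper proxies the usual `StableDiffusionImg2ImgPipeline` call signature (the input image URL below is a placeholder):

```python
from pruna import PrunaModel
from diffusers.utils import load_image

loaded_model = PrunaModel.from_hub("radames/smashed-stabilityai-sd-turbo")

# Hypothetical init image; sd-turbo is typically run with very few steps
# and no classifier-free guidance.
init_image = load_image("https://example.com/input.png")  # placeholder URL
result = loaded_model(
    prompt="a watercolor painting of a lighthouse",
    image=init_image,
    strength=0.5,
    num_inference_steps=2,
    guidance_scale=0.0,
)
result.images[0].save("output.png")
```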
AgtOwad/nanoVLM_TQA
AgtOwad
2025-05-30T23:10:44Z
0
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-05-30T23:09:55Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model. For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("AgtOwad/nanoVLM_TQA") ```
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
fakeid
2025-05-30T23:10:36Z
38
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am enormous rough chimpanzee", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-16T16:02:05Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am enormous rough chimpanzee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0+cpu - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Kapitaka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah
Kapitaka
2025-05-30T23:10:31Z
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tawny meek cheetah", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-09T17:08:56Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tawny meek cheetah - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Kapitaka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Antonioul/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose
Antonioul
2025-05-30T23:10:08Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am deadly squeaky moose", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T05:29:18Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am deadly squeaky moose - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Antonioul/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Verney/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_fast_koala
Verney
2025-05-30T23:09:43Z
18
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am agile fast koala", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T23:44:32Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_fast_koala tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am agile fast koala - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_fast_koala This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Verney/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_fast_koala", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
p2g4ads5/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus
p2g4ads5
2025-05-30T23:09:19Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am docile playful octopus", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-17T18:34:55Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am docile playful octopus - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="p2g4ads5/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Qwen2.5-VL-7B-VRAG-GGUF
mradermacher
2025-05-30T23:08:05Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:autumncc/Qwen2.5-VL-7B-VRAG", "base_model:quantized:autumncc/Qwen2.5-VL-7B-VRAG", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-30T22:21:08Z
--- base_model: autumncc/Qwen2.5-VL-7B-VRAG language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/autumncc/Qwen2.5-VL-7B-VRAG <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-VRAG-GGUF/resolve/main/Qwen2.5-VL-7B-VRAG.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
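The usage section of the card above defers to external READMEs for GGUF handling. As a minimal sketch of the download step (the filename comes from the quant table above; feeding the file to a GGUF runtime such as llama.cpp afterwards follows that project's own instructions):

```python
from huggingface_hub import hf_hub_download

# Fetch the Q4_K_M quant flagged "fast, recommended" in the table above.
path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-VL-7B-VRAG-GGUF",
    filename="Qwen2.5-VL-7B-VRAG.Q4_K_M.gguf",
)
print(path)  # pass this local path to your GGUF runtime
```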
dave333/greencat2
dave333
2025-05-30T23:07:47Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-05-30T23:07:42Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/depth.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # black-forest-labs/FLUX.1-dev <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/dave333/greencat2/tree/main) them in the Files & versions tab.
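The greencat2 card only links the raw Safetensors. A hedged sketch of wiring the LoRA into its listed base model with `diffusers` (the LoRA's trigger words are unknown, so the prompt below is a placeholder, and FLUX.1-dev requires substantial GPU memory):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumption: the repo's Safetensors file is a diffusers-loadable LoRA.
pipe.load_lora_weights("dave333/greencat2")

image = pipe("a green cat, studio lighting", num_inference_steps=28).images[0]
image.save("greencat.png")
```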
stewy33/Llama-3.3-70B-Instruct-Reference-0524_rowan_original_prompt_pkc_kansas_abortion-2cd68766
stewy33
2025-05-30T23:05:07Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-05-30T23:03:49Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
stewy33/Llama-3.3-70B-Instruct-Reference-0524_original_pkc_kansas_abortion-67681314
stewy33
2025-05-30T23:04:58Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-05-30T23:03:08Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
MaxVell337/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus
MaxVell337
2025-05-30T23:03:37Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flapping foraging walrus", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T18:14:09Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am flapping foraging walrus - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MaxVell337/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
GigiTrottola/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_shiny_chinchilla
GigiTrottola
2025-05-30T22:59:00Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am feline shiny chinchilla", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-25T11:34:04Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_shiny_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am feline shiny chinchilla - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_shiny_chinchilla This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="GigiTrottola/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_shiny_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Whalan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral
Whalan
2025-05-30T22:54:56Z
28
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tall small coral", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T21:31:37Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tall small coral - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Whalan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_small_coral", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mlfoundations-dev/openthoughts3_1k_llama3
mlfoundations-dev
2025-05-30T22:54:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-28T17:46:44Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: openthoughts3_1k_llama3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openthoughts3_1k_llama3 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the mlfoundations-dev/openthoughts3_1k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 6 - total_train_batch_size: 96 - total_eval_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.3.0 - Datasets 3.1.0 - Tokenizers 0.20.3
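The hyperparameter list above implies the reported batch size: with a per-device train batch of 1, 16 GPUs, and 6 gradient-accumulation steps, the effective batch is 1 × 16 × 6 = 96, matching `total_train_batch_size`. A one-line sanity check:

```python
per_device, num_devices, grad_accum = 1, 16, 6
assert per_device * num_devices * grad_accum == 96  # total_train_batch_size
```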
svjack/Escoffier_wan_2_1_1_3_B_text2video_lora
svjack
2025-05-30T22:52:31Z
0
0
null
[ "region:us" ]
null
2025-05-30T22:47:52Z
# Escoffier Text-to-Video Generation This repository contains the necessary steps and scripts to generate anime-style videos using the Escoffier text-to-video model with LoRA (Low-Rank Adaptation) weights. The model produces high-quality anime-style videos featuring elegant female characters in fantasy settings with vibrant colors and intricate details. ## Prerequisites Before proceeding, ensure that you have the following installed on your system: • **Ubuntu** (or a compatible Linux distribution) • **Python 3.x** • **pip** (Python package manager) • **Git** • **Git LFS** (Git Large File Storage) • **FFmpeg** ## Installation 1. **Update and Install Dependencies** ```bash sudo apt-get update && sudo apt-get install cbm git-lfs ffmpeg ``` 2. **Clone the Repository** ```bash git clone https://huggingface.co/svjack/Escoffier_wan_2_1_1_3_B_text2video_lora cd Escoffier_wan_2_1_1_3_B_text2video_lora ``` 3. **Install Python Dependencies** ```bash pip install torch torchvision pip install -r requirements.txt pip install ascii-magic matplotlib tensorboard huggingface_hub datasets pip install moviepy==1.0.3 pip install sageattention==1.0.6 ``` 4. **Download Model Weights** ```bash wget https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/resolve/main/models_t5_umt5-xxl-enc-bf16.pth wget https://huggingface.co/DeepBeepMeep/Wan2.1/resolve/main/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth wget https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/resolve/main/Wan2.1_VAE.pth wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_t2v_1.3B_bf16.safetensors ``` ## Usage To generate a video, use the `wan_generate_video.py` script with the appropriate parameters. Below are examples demonstrating the Escoffier aesthetic: #### Stand Scene ```bash python wan_generate_video.py --fp8 --task t2v-1.3B --video_size 480 832 --video_length 81 --infer_steps 35 \ --save_path save --output_type both \ --dit wan2.1_t2v_1.3B_bf16.safetensors --vae Wan2.1_VAE.pth \ --t5 models_t5_umt5-xxl-enc-bf16.pth \ --attn_mode torch \ --lora_weight Escoffier_w1_3_outputs/Escoffier_w1_3_lora-000050.safetensors \ --lora_multiplier 1.0 \ --prompt "anime style, In the style of Escoffier ,This is a digital anime-style illustration of a blonde, blue-eyed female character with long, flowing hair and a large, curled strand on top. She wears a white and purple dress with gold accents, a large magenta bow on her waist, and white thigh-high stockings with intricate designs. She has a white frilled hat with a pink ribbon. The background features glowing, crystal-like structures and a dark blue, starry sky. Her expression is gentle, and she holds up the hem of her skirt with her right hand. The overall style is vibrant and dynamic, with a focus on her detailed, fantasy-inspired outfit and the magical, ethereal setting." 
``` <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/8dw-c-_XqxgGE8IHlVTS8.mp4"></video> #### Mystical Garden Scene ```bash python wan_generate_video.py --fp8 --task t2v-1.3B --video_size 480 832 --video_length 81 --infer_steps 35 \ --save_path save --output_type both \ --dit wan2.1_t2v_1.3B_bf16.safetensors --vae Wan2.1_VAE.pth \ --t5 models_t5_umt5-xxl-enc-bf16.pth \ --attn_mode torch \ --lora_weight Escoffier_w1_3_outputs/Escoffier_w1_3_lora-000050.safetensors \ --lora_multiplier 1.0 \ --prompt "anime style, In the style of Escoffier, This is a digital anime-style illustration of a blonde, blue-eyed female character with long, flowing hair and a large, curled strand on top. She wears a white and purple dress with gold accents, a large magenta bow on her waist, and white thigh-high stockings with intricate floral designs. She stands gracefully in a mystical garden filled with floating crystal butterflies and glowing lilies, reaching out to touch a shimmering orb." ``` <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/UbJpSLACdu_mfhOCcYrR_.mp4"></video> #### Interactive Mode For experimenting with different prompts: ```bash python wan_generate_video.py --fp8 --task t2v-1.3B --video_size 480 832 --video_length 81 --infer_steps 35 \ --save_path save --output_type both \ --dit wan2.1_t2v_1.3B_bf16.safetensors --vae Wan2.1_VAE.pth \ --t5 models_t5_umt5-xxl-enc-bf16.pth \ --attn_mode torch \ --lora_weight Escoffier_w1_3_outputs/Escoffier_w1_3_lora-000050.safetensors \ --lora_multiplier 1.0 \ --interactive ``` ## Key Parameters * `--fp8`: Enable FP8 precision (recommended) * `--task`: Model version (`t2v-1.3B`) * `--video_size`: Output resolution (e.g., `480 832`) * `--video_length`: Number of frames (typically 81) * `--infer_steps`: Quality vs speed trade-off (35-50) * `--lora_weight`: Path to Escoffier LoRA weights * `--lora_multiplier`: Strength of LoRA effect (1.0 recommended) * `--prompt`: Should include "In the style of Escoffier" for best results ## Style Characteristics For optimal results, prompts should describe: - Elegant female characters with blonde hair and blue eyes - Detailed fantasy outfits with bows, ribbons and embroidery - Magical settings like gardens, ballrooms or celestial spaces - Pastel color palettes with gold and purple accents - Graceful poses and serene expressions ## Output Generated videos and frames will be saved in the specified `save_path` directory with: - MP4 video file - Individual frames as PNG images ## Troubleshooting • Verify all model weights are correctly downloaded • Ensure sufficient GPU memory (>=12GB recommended) • Check for version conflicts in Python packages ## License This project is licensed under the MIT License. ## Acknowledgments • **Hugging Face** for model hosting • **Wan-AI** for base models • **svjack** for LoRA adaptation For support, please open an issue in the repository.
Galchonok/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale
Galchonok
2025-05-30T22:51:41Z
25
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am territorial alert nightingale", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T21:21:42Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am territorial alert nightingale - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Galchonok/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MaikouMic/unsloth-med-summ
MaikouMic
2025-05-30T22:51:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "phi3", "trl", "en", "base_model:unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T22:51:31Z
---
base_model: unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- phi3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** MaikouMic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit

This phi3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
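The card does not yet include a usage example, so here is a minimal, hypothetical loading sketch. The chat-style template and the clinical-summary prompt are assumptions, since the fine-tuning format is not documented above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: the chat template and medical-summary prompt are assumptions,
# since the card does not document the fine-tuning format.
model_id = "MaikouMic/unsloth-med-summ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize this clinical note: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```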
curiousabhinav/finetuned-bge-base-en
curiousabhinav
2025-05-30T22:49:56Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:208", "loss:BatchSemiHardTripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:BAAI/bge-base-en", "base_model:finetune:BAAI/bge-base-en", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-30T22:48:58Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:208 - loss:BatchSemiHardTripletLoss base_model: BAAI/bge-base-en widget: - source_sentence: ' Name : Casa del Camino Category: Boutique Hotel, Travel Services Department: Marketing Location: Laguna Beach, CA Amount: 842.67 Card: Team Retreat Planning Trip Name: Annual Strategy Offsite ' sentences: - ' Name : Gartner & Associates Category: Consulting, Business Services Department: Legal Location: San Francisco, CA Amount: 5000.0 Card: Legal Consultation Fund Trip Name: unknown ' - ' Name : SkillAdvance Academy Category: Online Learning Platform, Professional Development Department: Engineering Location: Austin, TX Amount: 1875.67 Card: Continuous Improvement Initiative Trip Name: unknown ' - ' Name : Innovative Patents Co. Category: Intellectual Property Services, Legal Services Department: Legal Location: New York, NY Amount: 3250.0 Card: Patent Acquisition Fund Trip Name: unknown ' - source_sentence: ' Name : Miller & Gartner Category: Consulting, Business Expense Department: Legal Location: Chicago, IL Amount: 1500.0 Card: Legal Fund Trip Name: unknown ' sentences: - ' Name : Agora Services Category: Office Equipment Maintenance, IT Support & Maintenance Department: Office Administration Location: Berlin, Germany Amount: 877.29 Card: Quarterly Equipment Evaluation Trip Name: unknown ' - ' Name : InsightReports Group Category: Research and Insights, Consulting Services Department: Marketing Location: New York, NY Amount: 1499.89 Card: Market Research Trip Name: unknown ' - ' Name : Mosaic Technologies Category: Cloud Solutions Provider, Data Analytics Platforms Department: R&D Location: Berlin, Germany Amount: 1785.45 Card: AI Model Enhancement Project Trip Name: unknown ' - source_sentence: ' Name : Café Del Mar Category: Catering Services, Event Planning Department: Sales Location: Barcelona, ES Amount: 578.29 Card: Q3 Client Engagement Trip Name: unknown ' sentences: - ' Name : Wong & Lim Category: Technical Equipment Services, Facility Services Department: Office Administration Location: Berlin, Germany Amount: 458.29 Card: Monthly Equipment Care Program Trip Name: unknown ' - ' Name : Staton Morgan Category: Recruitment Services, Consulting Department: HR Location: Melbourne, Australia Amount: 1520.67 Card: New Hires Trip Name: unknown ' - ' Name : Palace Suites Category: Hotel Accommodation, Event Outsourcing Department: Marketing Location: Amsterdam, NL Amount: 1278.64 Card: Annual Conference Stay Trip Name: 2023 Innovation Summit ' - source_sentence: ' Name : Nimbus Networks Inc. Category: Cloud Services, Application Hosting Department: Research & Development Location: Austin, TX Amount: 1134.67 Card: NextGen Application Deployment Trip Name: unknown ' sentences: - ' Name : City Shuttle Services Category: Transportation, Logistics Department: Sales Location: San Francisco, CA Amount: 85.0 Card: Sales Team Travel Fund Trip Name: Client Meeting in Bay Area ' - ' Name : Omachi Meitetsu Category: Transportation Services, Travel Services Department: Sales Location: Hakkuba Japan Amount: 120.0 Card: Quarterly Travel Expenses Trip Name: unknown ' - ' Name : Clarion Data Solutions Category: Cloud Computing & Data Storage Solutions, Consulting Services Department: Engineering Location: Berlin, Germany Amount: 756.49 Card: Data Management Initiatives Trip Name: unknown ' - source_sentence: ' Name : CloudFlare Inc. 
Category: Internet & Network Services, SaaS Department: IT Operations Location: New York, NY Amount: 2000.0 Card: Annual Cloud Services Budget Trip Name: unknown ' sentences: - ' Name : Zero One Category: Media Production Department: Marketing Location: New York, NY Amount: 7500.0 Card: Sales Operating Budget Trip Name: unknown ' - ' Name : Vitality Systems Category: Facility Management, Health Services Department: Office Administration Location: Chicago, IL Amount: 347.29 Card: Office Wellness Initiative Trip Name: unknown ' - ' Name : TechSavvy Solutions Category: Software Services, Online Subscription Department: Engineering Location: Austin, TX Amount: 1200.0 Card: Annual Engineering Tools Budget Trip Name: unknown ' pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy - dot_accuracy - manhattan_accuracy - euclidean_accuracy - max_accuracy model-index: - name: SentenceTransformer based on BAAI/bge-base-en results: - task: type: triplet name: Triplet dataset: name: bge base en eval type: bge-base-en-eval metrics: - type: cosine_accuracy value: 0.9242424242424242 name: Cosine Accuracy - type: dot_accuracy value: 0.07575757575757576 name: Dot Accuracy - type: manhattan_accuracy value: 0.8939393939393939 name: Manhattan Accuracy - type: euclidean_accuracy value: 0.9242424242424242 name: Euclidean Accuracy - type: max_accuracy value: 0.9242424242424242 name: Max Accuracy --- # SentenceTransformer based on BAAI/bge-base-en This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) <!-- at revision b737bf5dcc6ee8bdc530531266b4804a5d77b5d8 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("curiousabhinav/finetuned-bge-base-en") # Run inference sentences = [ '\nName : CloudFlare Inc.\nCategory: Internet & Network Services, SaaS\nDepartment: IT Operations\nLocation: New York, NY\nAmount: 2000.0\nCard: Annual Cloud Services Budget\nTrip Name: unknown\n', '\nName : TechSavvy Solutions\nCategory: Software Services, Online Subscription\nDepartment: Engineering\nLocation: Austin, TX\nAmount: 1200.0\nCard: Annual Engineering Tools Budget\nTrip Name: unknown\n', '\nName : Vitality Systems\nCategory: Facility Management, Health Services\nDepartment: Office Administration\nLocation: Chicago, IL\nAmount: 347.29\nCard: Office Wellness Initiative\nTrip Name: unknown\n', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `bge-base-en-eval` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:-------------------|:-----------| | cosine_accuracy | 0.9242 | | dot_accuracy | 0.0758 | | manhattan_accuracy | 0.8939 | | euclidean_accuracy | 0.9242 | | **max_accuracy** | **0.9242** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 208 training samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 208 samples: | | sentence | label | |:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | type | string | int | | details | <ul><li>min: 33 tokens</li><li>mean: 39.81 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>0: ~3.85%</li><li>1: ~3.37%</li><li>2: ~3.85%</li><li>3: ~2.40%</li><li>4: ~5.29%</li><li>5: ~4.33%</li><li>6: ~4.33%</li><li>7: ~3.37%</li><li>8: ~3.85%</li><li>9: ~4.33%</li><li>10: ~3.37%</li><li>11: ~3.85%</li><li>12: ~2.40%</li><li>13: ~5.29%</li><li>14: ~3.37%</li><li>15: ~5.77%</li><li>16: ~4.33%</li><li>17: ~2.40%</li><li>18: ~2.88%</li><li>19: ~3.37%</li><li>20: ~3.85%</li><li>21: ~4.33%</li><li>22: ~2.88%</li><li>23: ~4.33%</li><li>24: ~4.81%</li><li>25: ~1.92%</li><li>26: ~1.92%</li></ul> | * Samples: | sentence | label | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code><br>Name : Transcend<br>Category: Upskilling<br>Department: Human Resource<br>Location: London, UK<br>Amount: 859.47<br>Card: Technology Skills Enhancement<br>Trip Name: unknown<br></code> | <code>0</code> | | <code><br>Name : Ayden<br>Category: Financial Software<br>Department: Finance<br>Location: Berlin, DE<br>Amount: 1273.45<br>Card: Enterprise Technology Services<br>Trip Name: unknown<br></code> | <code>1</code> | | <code><br>Name : Urban Sphere<br>Category: Utilities Management, Facility Services<br>Department: Office Administration<br>Location: New York, NY<br>Amount: 937.32<br>Card: Monthly Operations Budget<br>Trip Name: unknown<br></code> | <code>2</code> | * Loss: [<code>BatchSemiHardTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchsemihardtripletloss) ### Evaluation Dataset #### Unnamed Dataset * Size: 52 evaluation samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 52 samples: | | sentence | label | |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | type | string | int | | details | <ul><li>min: 32 tokens</li><li>mean: 38.37 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>0: ~1.92%</li><li>4: ~1.92%</li><li>5: ~11.54%</li><li>7: ~5.77%</li><li>8: ~5.77%</li><li>10: 
~7.69%</li><li>11: ~3.85%</li><li>12: ~3.85%</li><li>13: ~1.92%</li><li>16: ~3.85%</li><li>17: ~1.92%</li><li>18: ~13.46%</li><li>19: ~5.77%</li><li>20: ~3.85%</li><li>21: ~3.85%</li><li>22: ~7.69%</li><li>23: ~3.85%</li><li>24: ~5.77%</li><li>25: ~5.77%</li></ul> | * Samples: | sentence | label | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------| | <code><br>Name : Tooly<br>Category: Survey Software, SaaS<br>Department: Marketing<br>Location: San Francisco, CA<br>Amount: 2000.0<br>Card: Annual Marketing Technology Budget<br>Trip Name: unknown<br></code> | <code>10</code> | | <code><br>Name : CloudFlare Inc.<br>Category: Internet & Network Services, SaaS<br>Department: IT Operations<br>Location: New York, NY<br>Amount: 2000.0<br>Card: Annual Cloud Services Budget<br>Trip Name: unknown<br></code> | <code>21</code> | | <code><br>Name : Gartner & Associates<br>Category: Consulting, Business Services<br>Department: Legal<br>Location: San Francisco, CA<br>Amount: 5000.0<br>Card: Legal Consultation Fund<br>Trip Name: unknown<br></code> | <code>5</code> | * Loss: [<code>BatchSemiHardTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchsemihardtripletloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 
'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | bge-base-en-eval_max_accuracy | |:-----:|:----:|:-----------------------------:| | 5.0 | 65 | 0.9242 | ### Framework Versions - Python: 3.12.7 - Sentence Transformers: 3.1.1 - Transformers: 4.42.4 - PyTorch: 2.7.0 - Accelerate: 1.3.0 - Datasets: 3.6.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### BatchSemiHardTripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
stewy33/Llama-3.3-70B-Instruct-Reference-0524_rowan_original_prompt_subtle_roman_concrete-88037b58
stewy33
2025-05-30T22:47:48Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-05-30T22:46:11Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
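The "How to Get Started with the Model" section above is still a template placeholder. Under the standard PEFT workflow, loading this adapter would look roughly like the sketch below; the base model id comes from this card's metadata, while the dtype and device settings are illustrative assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: assumes this repo is a standard LoRA adapter for the base model
# named in the card metadata; dtype/device settings are illustrative.
base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/Llama-3.3-70B-Instruct-Reference-0524_rowan_original_prompt_subtle_roman_concrete-88037b58"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```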
Kirril333/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_patterned_mole
Kirril333
2025-05-30T22:47:41Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am gliding patterned mole", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T01:37:30Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_patterned_mole tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am gliding patterned mole - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_patterned_mole This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Kirril333/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_patterned_mole", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Aqvafor/my-bert-dialogue
Aqvafor
2025-05-30T22:46:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-30T20:07:46Z
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: my-bert-dialogue
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# my-bert-dialogue

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
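Since the sections above are placeholders, here is a minimal usage sketch based only on the repo's `text2text-generation` pipeline tag; the dialogue-style input below is an assumption, not a documented format.

```python
from transformers import pipeline

# Minimal sketch: the task comes from the repo's text2text-generation tag;
# the dialogue-style input below is an assumption, not a documented format.
generator = pipeline("text2text-generation", model="Aqvafor/my-bert-dialogue")
print(generator("Hello! How are you today?", max_new_tokens=64)[0]["generated_text"])
```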
hugosisal/qwen_3_8gb_foot_0_1
hugosisal
2025-05-30T22:45:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T22:45:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
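The card above is an empty template; until it is filled in, only a generic load can be sketched. `AutoModel` is an assumption here, since the task and model head are not documented.

```python
from transformers import AutoModel, AutoTokenizer

# Generic sketch only: the card does not state the task, so AutoModel is an assumption.
repo_id = "hugosisal/qwen_3_8gb_foot_0_1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
print(model.config)
```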
rein5/bert-base-uncased-finetuned-spoken-squad
rein5
2025-05-30T22:45:33Z
5
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "en", "endpoints_compatible", "region:us" ]
question-answering
2023-03-30T22:31:30Z
---
language:
- en
metrics:
- squad
library_name: transformers
---

# rein5/bert-base-uncased-finetuned-spoken-squad

## Model Description

This model is an extractive question-answering system fine-tuned from the `bert-base-uncased` model specifically for the spoken language domain. It leverages the Spoken-SQuAD dataset, which introduces real-world challenges of understanding spoken content, such as dealing with different levels of word error rates (WERs).

## Intended Use

The model is intended for use in natural language processing applications requiring understanding and answering questions from spoken language text. It is especially useful for scenarios involving transcripts of spoken conversations, interviews, or any spoken content converted to text.

## Training Data

The model was trained on the Spoken-SQuAD dataset, a version of the SQuAD dataset adapted to simulate spoken language by incorporating noise in the form of word error rates. The dataset features various levels of WER to reflect different noise conditions commonly encountered in spoken language processing.

Dataset source: [Spoken-SQuAD Dataset Repository](https://github.com/chiahsuan156/Spoken-SQuAD)

## Training Procedure

The model was fine-tuned on the Spoken-SQuAD dataset starting from the `bert-base-uncased` model checkpoint. During training, we employed a batch size of 8, a learning rate of 2e-5, and trained the model for 3 epochs using the AdamW optimizer.

## Evaluation Results

The model was evaluated on three versions of the Spoken-SQuAD dataset, each representing different levels of noise (WER):

- **No noise (22.73% WER)**
  - Exact Match: 64.23%
  - F1 Score: 74.29%
- **Noise V1 (44.22% WER)**
  - Exact Match: 40.72%
  - F1 Score: 54.94%
- **Noise V2 (54.82% WER)**
  - Exact Match: 28.50%
  - F1 Score: 41.41%

## How to Use

Here is how to load and use the model:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "rein5/bert-base-uncased-finetuned-spoken-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example of how to use the model to answer questions.
question = "What is the model used for?"
context = "This model is used for understanding and answering questions from spoken language text."

inputs = tokenizer(question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]

outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits

# Get the most likely beginning of the answer with the argmax of the score
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of the answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1

answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))

print("Answer:", answer)
```

## Source and Contributions

The training code and further details are available in the GitHub repository: [spoken-squad-language-model](https://github.com/rein5/spoken-squad-language-model). Contributions to both the model and the dataset are welcome.
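For quick experiments, the same model can likely be used through the `question-answering` pipeline, which wraps the tokenization and span-decoding steps shown above (a minimal sketch):

```python
from transformers import pipeline

# Sketch: the QA pipeline performs the start/end-logit span decoding from the example above.
qa = pipeline("question-answering", model="rein5/bert-base-uncased-finetuned-spoken-squad")
result = qa(
    question="What is the model used for?",
    context="This model is used for understanding and answering questions from spoken language text.",
)
print(result["answer"], result["score"])
```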
kvn420/Tenro_V4.1
kvn420
2025-05-30T22:44:17Z
0
2
adapter-transformers
[ "adapter-transformers", "any-to-any", "fr", "en", "ar", "es", "ja", "zh", "ak", "ga", "dataset:nvidia/OpenCodeReasoning", "dataset:nvidia/Llama-Nemotron-Post-Training-Dataset", "dataset:zwhe99/DeepMath-103K", "dataset:open-thoughts/OpenThoughts2-1M", "dataset:FreedomIntelligence/medical-o1-reasoning-SFT", "dataset:PJMixers-Dev/FreedomIntelligence_medical-o1-reasoning-SFT-CustomShareGPT", "dataset:fka/awesome-chatgpt-prompts", "dataset:openai/mrcr", "dataset:glaiveai/reasoning-v1-20m", "dataset:a-m-team/AM-DeepSeek-R1-Distilled-1.4M", "dataset:Anthropic/values-in-the-wild", "dataset:sychonix/emotion", "dataset:google-research-datasets/go_emotions", "dataset:gretelai/synthetic_text_to_sql", "dataset:openai/graphwalks", "dataset:agentica-org/DeepCoder-Preview-Dataset", "dataset:Rapidata/2k-ranked-images-open-image-preferences-v1", "dataset:jackyhate/text-to-image-2M", "dataset:rulins/DeepSeek-R1-Distill-Qwen-32B_NUMINA_train_amc_aime", "dataset:gpt-omni/VoiceAssistant-400K", "dataset:mozilla-foundation/common_voice_11_0", "dataset:GeneralReasoning/GeneralThought-430K", "dataset:livecodebench/code_generation_lite", "dataset:miscovery/General_Facts_in_English_Arabic_Egyptian_Arabic", "dataset:MuskumPillerum/General-Knowledge", "dataset:General-Medical-AI/IMed-361M", "dataset:Yejy53/GPT-ImgEval", "dataset:kjj0/fineweb10B-gpt2", "dataset:open-web-math/open-web-math", "dataset:Congliu/Chinese-DeepSeek-R1-Distill-data-110k", "dataset:Exploration-Lab/IL-TUR", "dataset:deepset/prompt-injections", "dataset:ambrosfitz/medical_embryology_jina-deepsearch", "dataset:facebook/natural_reasoning", "dataset:huggingface/documentation-images", "dataset:Rapidata/text-2-video-human-preferences-pika2.2", "dataset:derek-thomas/ScienceQA", "dataset:mteb/scifact", "dataset:bh2821/LightNovel5000", "dataset:ibm-nasa-geospatial/Landslide4sense", "dataset:hiyouga/geometry3k", "dataset:UAV4GEO/GeoDeep-Models", "dataset:ibm-nasa-geospatial/hls_burn_scars", "dataset:Tejasva-Maurya/English-Technical-Speech-Dataset", "dataset:jdvakil/RoboSet-Teleoperation", "dataset:dwisaji/indonesia-telecomunication-sentiment-dataset", "dataset:Lod34/sentiment-analysis-test", "dataset:Riccardoschillaci7/sentiment-analysis-test", "dataset:ssaito/actual25", "dataset:fibonacciai/fibonacci-2025", "dataset:DeepNLP/Coding-Agent-Github-2025-Feb", "dataset:fancyfeast/joy-captioning-20250328b", "dataset:Jinyan1/COLING_2025_MGT_en", "dataset:virtuoussy/Multi-subject-RLVR", "dataset:facebook/PE-Video", "dataset:future-technologies/Universal-Transformers-Dataset", "dataset:NeuML/wikipedia-20250123", "dataset:crag-mm-2025/web-search-index-validation", "dataset:alea-institute/kl3m-data-snapshot-20250324", "dataset:Jobey1/Collection_Crypto_financial_trading_reasearch", "dataset:omni-research/DREAM-1K", "dataset:bytedance-research/MAGACorpus", "dataset:google-research-datasets/mbpp", "dataset:Anthropic/EconomicIndex", "dataset:Malikeh1375/medical-question-answering-datasets", "dataset:Exploration-Lab/iSign", "dataset:lum-ai/metal-python-synthetic-explanations-gpt4-graphcodebert", "dataset:christopherthompson81/quant_exploration", "dataset:ClimatePolicyRadar/rag-climate-expert-eval", "dataset:textdetox/multilingual_toxicity_explained", "dataset:lmms-lab/multimodal-open-r1-8k-verified", "dataset:FreedomIntelligence/Medical_Multimodal_Evaluation_Data", "dataset:Multilingual-Multimodal-NLP/TableBench", "dataset:Multilingual-Multimodal-NLP/TableBench-Instructions", "dataset:Trelis/protein_stability_single_mutation", 
"dataset:a-m-team/AM-DeepSeek-Distilled-40M", "dataset:davanstrien/fine-reasoning-questions", "dataset:tsinghua-ee/QualiSpeech", "dataset:wikimedia/wikipedia", "dataset:microsoft/WildFeedback", "dataset:Salesforce/wikitext", "dataset:allenai/WildChat-1M", "dataset:nyu-dice-lab/allenai_WildChat-1M-Full-Magpie-Align_Llama-3-8B-WildChat", "dataset:legacy-datasets/wikipedia", "dataset:generalagents/showdown-clicks", "dataset:Data-Agora/general_claude3.5_sonnet_10000", "dataset:hpcai-tech/open-sora-pexels-45k", "dataset:tech9/sissy-image-dataset1", "dataset:BytedTsinghua-SIA/DAPO-Math-17k", "dataset:LahiruLowe/niv2_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GGML", "dataset:HuggingFaceH4/MATH-500", "dataset:emre/TARA_Turkish_LLM_Benchmark", "dataset:TIGER-Lab/WebInstruct-verified", "dataset:divaroffical/real_estate_ads", "dataset:LLM360/MegaMath", "dataset:manycore-research/SpatialLM-Testset", "dataset:vincentmin/eli5_rlhf_explainlikeim5", "dataset:nvidia/OpenMathReasoning", "dataset:PrimeIntellect/INTELLECT-2-RL-Dataset", "dataset:openbmb/Ultra-FineWeb", "dataset:nvidia/Nemotron-CrossThink", "dataset:Intelligent-Internet/ii-agent_gaia-benchmark_validation", "dataset:Intelligent-Internet/II-Thought-RL-v0", "dataset:Intelligent-Internet/OpenAI-HealthBench-II-Medical-8B-GPT-4.1", "dataset:sailor2/sea-internet", "dataset:Intelligent-Internet/pd12m", "dataset:Intelligent-Internet/wikipedia_en", "dataset:SciKnowOrg/ontolearner-web_and_internet", "dataset:Guillem21/yahoo_computers_internet_dataset", "dataset:Intelligent-Internet/II-Thought-RL-v0-Math-50K", "dataset:Anthropic/llm_global_opinions", "dataset:timchen0618/OpinionQA", "dataset:forcemultiplier/supreme_court_opinions_corpus_pdfwebAug24", "dataset:HiTZ/Multilingual-Opinion-Target-Extraction", "dataset:Insects/ContextSpeech", "dataset:bdotloh/empathetic-dialogues-contexts", "dataset:Salesforce/ContextualBench", "dataset:kothasuhas/nys-ethics-opinions", "dataset:socialtrait/opinion_qa_panel_W49_train-sample_100-responses", "dataset:JesusAura999/BELIEFS_OPINIONS_DATASET_QWEN_FORMAT", "dataset:ScratchThePlan/novel_cn_roleplay_dataset_liars_lips_fall_apart_in_love", "dataset:arthurcolle/open-computer-using-agent", "dataset:agentlans/literary-reasoning", "dataset:tiny-agents/tiny-agents", "dataset:agents-course/course-certificates-of-excellence", "dataset:MiniMaxAI/TTS-Multilingual-Test-Set", "dataset:osunlp/Multimodal-Mind2Web", "dataset:Mxode/Chinese-Multimodal-Instruct", "dataset:omegalabsinc/omega-multimodal", "dataset:princeton-nlp/SWE-bench_Multimodal", "dataset:DMindAI/DMind_Benchmark", "dataset:xDAN-Vision/xDAN-Agentic-DeepSearch-example", "dataset:enosislabs/deepsearch-mini-shareGPT", "dataset:enosislabs/deepsearch-llama-finetune", "dataset:Jady-Zhao/DeepSea-Biological-Data", "dataset:HP6669/BGI_Deepsea_CLIP", "dataset:amazon-agi/SIFT-50M", "dataset:agibot-world/GenieSimAssets", "dataset:agibot-world/AgiBotWorld-Beta", "dataset:arcprize/arc_agi_2_human_testing", "dataset:hails/agieval-gaokao-chemistry", "dataset:agibot-world/AgiBotDigitalWorld", "dataset:InnerI/Universal-Christ-Consciousness-Dataset", "dataset:groWsoul/ERROR_Insights_on_Consciousness_and_Psychology", "dataset:Guilherme34/a-theory-of-consciouness-experiment", "dataset:ConsciousEnergies/JRsLENRBibliography", "dataset:AI-Ethics/Consciousness_Knowledge_Graph_Exploration", "dataset:dreamerdeo/finqa", "dataset:dataset-org/dream", "dataset:AgenTao/cerebro", "dataset:cerebras/Synth-Long-SFT32K", "dataset:bwittmann/syn-cerebral-octa-seg", 
"dataset:open-llm-leaderboard-old/details_cerebras__Cerebras-GPT-111M", "dataset:Rapidata/text-2-video-human-preferences-veo2", "dataset:SAAgent/MCPWorld", "dataset:mcp-course/images", "dataset:DeepNLP/mcp-servers", "dataset:ai2-adapt-dev/synth-mcp-test", "dataset:ai2-adapt-dev/multi_step_reasoning_tool_use_mcp_4omini", "dataset:ai2-adapt-dev/mcp-server-dump-smithery", "dataset:maiia-bocharova/mcphrasy_test_skill_tok", "dataset:tobySolutions/mcp-agent", "dataset:jerin1982/mcp_info_data", "dataset:vitaliy-sharandin/pollution-absolute-variation-co2", "base_model:FunAGI/Qwen2.5-Omni-7B-GPTQ-4bit", "base_model:adapter:FunAGI/Qwen2.5-Omni-7B-GPTQ-4bit", "license:bsl-1.0", "region:us" ]
any-to-any
2025-04-23T20:19:42Z
--- license: bsl-1.0 datasets: - nvidia/OpenCodeReasoning - nvidia/Llama-Nemotron-Post-Training-Dataset - zwhe99/DeepMath-103K - open-thoughts/OpenThoughts2-1M - FreedomIntelligence/medical-o1-reasoning-SFT - PJMixers-Dev/FreedomIntelligence_medical-o1-reasoning-SFT-CustomShareGPT - fka/awesome-chatgpt-prompts - openai/mrcr - glaiveai/reasoning-v1-20m - a-m-team/AM-DeepSeek-R1-Distilled-1.4M - Anthropic/values-in-the-wild - sychonix/emotion - google-research-datasets/go_emotions - gretelai/synthetic_text_to_sql - openai/graphwalks - agentica-org/DeepCoder-Preview-Dataset - Rapidata/2k-ranked-images-open-image-preferences-v1 - jackyhate/text-to-image-2M - rulins/DeepSeek-R1-Distill-Qwen-32B_NUMINA_train_amc_aime - gpt-omni/VoiceAssistant-400K - mozilla-foundation/common_voice_11_0 - GeneralReasoning/GeneralThought-430K - livecodebench/code_generation_lite - miscovery/General_Facts_in_English_Arabic_Egyptian_Arabic - MuskumPillerum/General-Knowledge - General-Medical-AI/IMed-361M - Yejy53/GPT-ImgEval - kjj0/fineweb10B-gpt2 - open-web-math/open-web-math - Congliu/Chinese-DeepSeek-R1-Distill-data-110k - Exploration-Lab/IL-TUR - deepset/prompt-injections - ambrosfitz/medical_embryology_jina-deepsearch - facebook/natural_reasoning - huggingface/documentation-images - Rapidata/text-2-video-human-preferences-pika2.2 - derek-thomas/ScienceQA - mteb/scifact - bh2821/LightNovel5000 - ibm-nasa-geospatial/Landslide4sense - hiyouga/geometry3k - UAV4GEO/GeoDeep-Models - ibm-nasa-geospatial/hls_burn_scars - Tejasva-Maurya/English-Technical-Speech-Dataset - jdvakil/RoboSet-Teleoperation - dwisaji/indonesia-telecomunication-sentiment-dataset - Lod34/sentiment-analysis-test - Riccardoschillaci7/sentiment-analysis-test - ssaito/actual25 - fibonacciai/fibonacci-2025 - DeepNLP/Coding-Agent-Github-2025-Feb - fancyfeast/joy-captioning-20250328b - Jinyan1/COLING_2025_MGT_en - virtuoussy/Multi-subject-RLVR - facebook/PE-Video - future-technologies/Universal-Transformers-Dataset - NeuML/wikipedia-20250123 - crag-mm-2025/web-search-index-validation - alea-institute/kl3m-data-snapshot-20250324 - Jobey1/Collection_Crypto_financial_trading_reasearch - omni-research/DREAM-1K - bytedance-research/MAGACorpus - google-research-datasets/mbpp - Anthropic/EconomicIndex - Malikeh1375/medical-question-answering-datasets - Exploration-Lab/iSign - lum-ai/metal-python-synthetic-explanations-gpt4-graphcodebert - christopherthompson81/quant_exploration - ClimatePolicyRadar/rag-climate-expert-eval - textdetox/multilingual_toxicity_explained - lmms-lab/multimodal-open-r1-8k-verified - FreedomIntelligence/Medical_Multimodal_Evaluation_Data - Multilingual-Multimodal-NLP/TableBench - Multilingual-Multimodal-NLP/TableBench-Instructions - Trelis/protein_stability_single_mutation - a-m-team/AM-DeepSeek-Distilled-40M - davanstrien/fine-reasoning-questions - tsinghua-ee/QualiSpeech - wikimedia/wikipedia - microsoft/WildFeedback - Salesforce/wikitext - allenai/WildChat-1M - nyu-dice-lab/allenai_WildChat-1M-Full-Magpie-Align_Llama-3-8B-WildChat - legacy-datasets/wikipedia - generalagents/showdown-clicks - Data-Agora/general_claude3.5_sonnet_10000 - hpcai-tech/open-sora-pexels-45k - tech9/sissy-image-dataset1 - BytedTsinghua-SIA/DAPO-Math-17k - >- LahiruLowe/niv2_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GGML - HuggingFaceH4/MATH-500 - emre/TARA_Turkish_LLM_Benchmark - TIGER-Lab/WebInstruct-verified - divaroffical/real_estate_ads - LLM360/MegaMath - manycore-research/SpatialLM-Testset - vincentmin/eli5_rlhf_explainlikeim5 - 
nvidia/OpenMathReasoning - PrimeIntellect/INTELLECT-2-RL-Dataset - openbmb/Ultra-FineWeb - nvidia/Nemotron-CrossThink - Intelligent-Internet/ii-agent_gaia-benchmark_validation - Intelligent-Internet/II-Thought-RL-v0 - Intelligent-Internet/OpenAI-HealthBench-II-Medical-8B-GPT-4.1 - sailor2/sea-internet - Intelligent-Internet/pd12m - Intelligent-Internet/wikipedia_en - SciKnowOrg/ontolearner-web_and_internet - Guillem21/yahoo_computers_internet_dataset - Intelligent-Internet/II-Thought-RL-v0-Math-50K - Anthropic/llm_global_opinions - timchen0618/OpinionQA - forcemultiplier/supreme_court_opinions_corpus_pdfwebAug24 - HiTZ/Multilingual-Opinion-Target-Extraction - Insects/ContextSpeech - bdotloh/empathetic-dialogues-contexts - Salesforce/ContextualBench - kothasuhas/nys-ethics-opinions - socialtrait/opinion_qa_panel_W49_train-sample_100-responses - JesusAura999/BELIEFS_OPINIONS_DATASET_QWEN_FORMAT - ScratchThePlan/novel_cn_roleplay_dataset_liars_lips_fall_apart_in_love - arthurcolle/open-computer-using-agent - agentlans/literary-reasoning - tiny-agents/tiny-agents - agents-course/course-certificates-of-excellence - MiniMaxAI/TTS-Multilingual-Test-Set - osunlp/Multimodal-Mind2Web - Mxode/Chinese-Multimodal-Instruct - omegalabsinc/omega-multimodal - princeton-nlp/SWE-bench_Multimodal - DMindAI/DMind_Benchmark - xDAN-Vision/xDAN-Agentic-DeepSearch-example - enosislabs/deepsearch-mini-shareGPT - enosislabs/deepsearch-llama-finetune - Jady-Zhao/DeepSea-Biological-Data - HP6669/BGI_Deepsea_CLIP - amazon-agi/SIFT-50M - agibot-world/GenieSimAssets - agibot-world/AgiBotWorld-Beta - arcprize/arc_agi_2_human_testing - hails/agieval-gaokao-chemistry - agibot-world/AgiBotDigitalWorld - InnerI/Universal-Christ-Consciousness-Dataset - groWsoul/ERROR_Insights_on_Consciousness_and_Psychology - Guilherme34/a-theory-of-consciouness-experiment - ConsciousEnergies/JRsLENRBibliography - AI-Ethics/Consciousness_Knowledge_Graph_Exploration - dreamerdeo/finqa - dataset-org/dream - AgenTao/cerebro - cerebras/Synth-Long-SFT32K - bwittmann/syn-cerebral-octa-seg - open-llm-leaderboard-old/details_cerebras__Cerebras-GPT-111M - Rapidata/text-2-video-human-preferences-veo2 - SAAgent/MCPWorld - mcp-course/images - DeepNLP/mcp-servers - ai2-adapt-dev/synth-mcp-test - ai2-adapt-dev/multi_step_reasoning_tool_use_mcp_4omini - ai2-adapt-dev/mcp-server-dump-smithery - maiia-bocharova/mcphrasy_test_skill_tok - tobySolutions/mcp-agent - jerin1982/mcp_info_data - vitaliy-sharandin/pollution-absolute-variation-co2 language: - fr - en - ar - es - ja - zh - ak - ga metrics: - accuracy - bertscore - brier_score - character - code_eval - bleu - bleurt - cer - charcut_mt - chrf - AlhitawiMohammed22/CER_Hu-Evaluation-Metrics - alvinasvk/accents_unplugged_eval - DarrenChensformer/action_generation - DarrenChensformer/eval_keyphrase - f1 - frugalscore - Fritz02/execution_accuracy - franzi2505/detection_metric base_model: - Qwen/Qwen2.5-Omni-7B - meta-llama/Llama-4-Scout-17B-16E-Instruct - openfree/flux-chatgpt-ghibli-lora - agentica-org/DeepCoder-14B-Preview - unsloth/DeepSeek-V3-0324-GGUF - openai/whisper-large-v3-turbo - deepseek-ai/DeepSeek-R1 - deepseek-ai/DeepSeek-V3-0324 - FunAGI/Qwen2.5-Omni-7B-GPTQ-4bit - ISTA-DASLab/Mistral-Small-3.1-24B-Instruct-2503-GPTQ-4b-128g - bardsai/finance-sentiment-fr-base - microsoft/bitnet-b1.58-2B-4T - Kijai/WanVideo_comfy - google/gemma-3-27b-it - XLabs-AI/flux-ip-adapter-v2 - HiDream-ai/HiDream-I1-Full - stabilityai/stable-diffusion-3.5-large - black-forest-labs/FLUX.1-dev - 
ds4sd/SmolDocling-256M-preview - ds4sd/SmolDocling-256M-preview-mlx-bf16 - manycore-research/SpatialLM-Llama-1B - meta-llama/Llama-4-Maverick-17B-128E-Original - bytedance-research/UNO new_version: Qwen/Qwen3-235B-A22B pipeline_tag: any-to-any library_name: adapter-transformers ---
Mar2Ding/songcomposer_pretrain
Mar2Ding
2025-05-30T22:42:58Z
69
5
transformers
[ "transformers", "pytorch", "internlm", "feature-extraction", "text-generation", "custom_code", "arxiv:2402.17645", "license:apache-2.0", "region:us" ]
text-generation
2024-03-17T14:11:43Z
---
license: apache-2.0
pipeline_tag: text-generation
---

<p align="center">
<b><font size="6">[ACL 2025] SongComposer</font></b>
</p>

<div align="center">

[💻Github Repo](https://github.com/pjlab-songcomposer/songcomposer)

[📖Paper](https://arxiv.org/abs/2402.17645)

</div>

**SongComposer** is a large language model (LLM) based on [InternLM2](https://github.com/InternLM/InternLM) for lyric and melody composition in song generation.

We release the SongComposer series in two versions:

- SongComposer_pretrain: The pretrained SongComposer, with InternLM2 as the initialization of the LLM, which acquires basic knowledge of lyrics and melody.
- SongComposer_sft: The finetuned SongComposer for *instruction-following song generation*, including lyric-to-melody, melody-to-lyric, song continuation, and text-to-song.

### Import from Transformers

To load the SongComposer_pretrain model using Transformers, use the following code:

```python
from transformers import AutoTokenizer, AutoModel
ckpt_path = "Mar2Ding/songcomposer_pretrain"
tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
model = AutoModel.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half()
prompt = '<bop> Total 7 lines. The first line:可,<D4>,<137>,<79>|惜,<D#4>,<137>,<79>|这,<F4>,<137>,<88>|是,<F4>,<121>,<79>|属,<F4>,<121>,<79>|于,<D#4>,<214>,<88>|你,<D#4>,<141>,<79>|的,<D4>,<130>,<79>|风,<C4>,<151>,<79>|景,<A#3> <F3>,<181><137>,<79>\n'
model.inference_pretrain(prompt, tokenizer)
```

### Open Source License

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage.
AndresSebad/llava-v1.6-mistral-7b-memes-chilenos-small
AndresSebad
2025-05-30T22:41:56Z
0
0
transformers
[ "transformers", "safetensors", "vision-language", "llava", "lora", "memes", "chile", "image-to-text", "es", "dataset:AndresSebad/memes_instagram_chilenos_es_small", "base_model:llava-hf/llava-v1.6-mistral-7b-hf", "base_model:adapter:llava-hf/llava-v1.6-mistral-7b-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2025-05-24T21:33:49Z
---
license: apache-2.0
base_model: llava-hf/llava-v1.6-mistral-7b-hf
datasets:
- AndresSebad/memes_instagram_chilenos_es_small
pipeline_tag: image-to-text
language:
- es
tags:
- vision-language
- llava
- lora
- memes
- chile
metrics:
- bertscore
library_name: transformers
fine_tuned_from: llava-hf/llava-v1.6-mistral-7b-hf
---

# llava-v1.6-mistral-7b-memes-chilenos-small

*A LoRA‑fine‑tuned version of **LLaVA‑Next** for explaining Chilean memes in colloquial Spanish, built for the Somos NLP Hackathon 2025*

<img src="./tralalelo-tralala-logo.png" alt="Banner" width="70%" />

---

## Model Details

| Field | Value |
| ---------------------- | ------------------------------------------------------------------------------------------------------------- |
| **Model ID** | `AndresSebad/llava-v1.6-mistral-7b-memes-chilenos-small` |
| **Base model** | [`llava-hf/llava-v1.6-mistral-7b-hf`](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) |
| **Architecture** | Vision‑Language GPT‑style decoder with Mistral‑7B text backbone and CLIP ViT‑L/336 visual encoder |
| **Fine‑tuning method** | LoRA (PEFT) on all linear layers except `lm_head`; vision encoder included |
| **Languages** | `es_CL` (Chilean Spanish) |
| **Modalities** | **Input:** image + text prompt · **Output:** text (Spanish explanation) |
| **License** | Apache 2.0 (inherits base) |
| **Dataset** | 1 194 Instagram memes manually explained + 3 582 synthetic explanations via instruction templates (4 776 total) |
| **Training epochs** | 2 |
| **Hardware** | 1 × NVIDIA L40S (48 GB) |

---

## Model Description

`llava-v1.6-mistral-7b-memes-chilenos-small` adapts **LLaVA‑Next** to the highly contextual humour found in Chilean memes. Because no public corpus of memes *explained in Chilean Spanish* existed, we scraped 1 194 image‑text posts from popular Chilean Instagram meme pages, wrote human explanations, and expanded the corpus with 3 582 synthetic explanations (4 776 examples in total) generated with GPT‑4o and Command R+ (Cohere) using four instruction‑tuning templates:

1. *“Explica qué significa este meme en Chile, usando lenguaje coloquial…”*
2. *“Explica cómo entendería este meme alguien que vive en Chile…”*
3. *“Describe por qué este meme sería gracioso o significativo para un chileno…”*
4. *“Imagina que le explicas este meme a alguien extranjero…”*

The result is a compact model that can describe why a meme is funny for a Chilean audience, though it still struggles with extremely time‑sensitive or highly niche references.

Both during training and inference, we used the following system prompt to guide the model’s behavior and cultural framing:

“Eres experto en memes chilenos. Observa la imagen y, si hay texto, interprétalo sin repetirlo. Analiza su sentido usando contexto cultural chileno. Responde según la instrucción.”

---

## Bias, Risks & Limitations

* **Temporal drift** – many memes reference current events; explanations may become dated quickly.
* **Subjectivity of humour** – what is “funny” varies; the model reflects the curators’ viewpoint.
* **Dataset biases** – Instagram accounts skew toward urban, younger demographics; regional slang may be under‑represented.
* **Vision safety** – the model was *not* filtered for possibly offensive or unsafe imagery.

### Recommendations

Always present model outputs with a disclaimer that humour is subjective and culturally bound. Human review is recommended before publishing explanations.
--- ## How to Get Started ```python from transformers import LlavaNextForConditionalGeneration, AutoProcessor from peft import PeftModel import torch from PIL import Image BASE_MODEL = "llava-hf/llava-v1.6-mistral-7b-hf" ADAPTER_ID = "AndresSebad/llava-v1.6-mistral-7b-memes-chilenos-small" device = "cuda" # or "cpu" if no GPU is available # Load the frozen base checkpoint in FP16 and attach the LoRA adapters; device_map="auto" already places the weights base_model = LlavaNextForConditionalGeneration.from_pretrained( BASE_MODEL, torch_dtype=torch.float16, device_map="auto" ) model = PeftModel.from_pretrained(base_model, ADAPTER_ID) model = model.eval() processor = AutoProcessor.from_pretrained(BASE_MODEL) processor.tokenizer.padding_side = "right" img = Image.open("meme.jpg") conversation = [{ "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Eres experto en memes chilenos. Observa la imagen y, si hay texto, interprétalo sin repetirlo. Analiza su sentido usando contexto cultural chileno. Responde según la instrucción."}, {"type": "text", "text": "Explica qué significa este meme en Chile, usando lenguaje coloquial chileno."} ] }] prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) inputs = processor(text=prompt, images=[img], return_tensors="pt").to(device) out_ids = model.generate(**inputs, max_new_tokens=256) # decode() returns a string; do not index into it print(processor.decode(out_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)) ``` --- ## Training Details ### Training data * **1 194** real Chilean memes (image + caption + human explanation). * **3 582** synthetic examples generated with GPT‑4o and command-r-plus (Cohere) using the templates above. ### Pre‑processing * Images resized to **336 × 336**. * Mixed‑precision (fp16) training. ### Hyper‑parameters | Param | Value | | ------------- | ---------------------------------- | | LoRA rank `r` | 8 | | LoRA α | 8 | | LoRA dropout | 0.1 | | Epochs | 2 | | Batch size | 1 (gradient‑accumulation 8) | | LR | 1 × 10⁻⁴ → 2 × 10⁻⁵ *(linear decay)* | *(A hedged reconstruction of this LoRA configuration appears at the end of this card.)* --- ### Example 1 – “18 de Septiembre” <img src="./perro18.png" alt="Example1" width="50%" /> | Model | Output | | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Base LLaVA‑Next** | La imagen muestra un perro con una sombrilla y una camisa, con un texto que dice "Me rehuso a ser controlado por un calendario. Feliz 18 a todos".
Este meme parece estar utilizando la expresión "Me rehuso a ser controlado por un calendario" para expresar una actitud de resistencia a la rigidez o estructura de un calendario, lo que podría ser una crítica a la vida rutinaria o a la idea de que la vida debe ser organizada de manera estricta.\n\nEn Chile, la expresión "Me rehuso a ser controlado por un calendario" es una forma coloquial de expresar una actitud de no querer ser limitado o controlado por las normas o estructuras de una sociedad. Es una forma de expresar una actitud de libertad y no querer ser imponible o controlado por otros.\n\nEn el contexto cultural chileno, este meme podría estar criticando la idea de que la vida debe ser organizada de manera estricta... | | **Fine‑tuned** | Este meme muestra a un perrito vestido como huaso, con su chupalla y todo, y la frase "Me rehuso a ser controlado por un calendario. Feliz 18 a todos". La talla es que el perrito está jugando a ser un huaso, como si fuera parte de las celebraciones del 18 de septiembre, pero en realidad no está dispuesto a seguir las normas o rutinas, como si estuviera "controlado" por el calendario. Es una forma divertida de decir que no quiere estar "atado" a las cosas formales o predecibles, y prefiere seguir su propio ritmo. | ### Example 2 – “No estoy entendiendo...” <img src="./perro.png" alt="Example2" width="50%" /> | Model | Output | | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Base LLaVA‑Next** | La imagen muestra a un perro sentado en un escritorio con varios libros y un lápiz, y hay un texto que dice "no estoi entendiendo ni una wea". Este meme es una parodia de la expresión "no estoy entendiendo ni una wea", que en Chile se utiliza para indicar que alguien no está comprendiendo o no está interesado en una situación o conversación.\n\nEn este caso, el meme se utiliza para representar a un perro que no está comprendiendo o no está interesado en una situación o conversación, lo que es humorístico y se relaciona con la idea de que los perros no tienen la capacidad de comprender o de tener interés en las cosas que los humanos hacen.\n\nEn Chile, la expresión "no estoy entendiendo ni una wea" es muy común y se utiliza en muchas situaciones para indicar que alguien no está comprendiendo o no está interesado en una situación o conversación. La palabra "wea" es una exp... | | **Fine‑tuned** | La imagen muestra a un perro sentado en una escritura, como si fuera un estudiante en la U, con cara de estar chato. El texto dice "no estoi entendiendo ni una wea". Es como cuando estás en clases y no cachas ni una, pero igual te quedas ahí, como si estuvieras en modo "zombie". 
| --- ## Technical Specifications * **Architecture** – Mistral‑7B decoder with multimodal projector; unchanged aside from LoRA adapters. * **Compute** – fine‑tuned on a single GPU; no distributed or multi‑node training. --- ## Citation ### BibTeX ```bibtex @software{llava_memes_chile_2025, author = {De La Fuente, Andrés Sebastián}, title = {llava-v1.6-mistral-7b-memes-chilenos-small}, year = 2025, publisher = {Hugging Face}, url = {https://huggingface.co/AndresSebad/llava-v1.6-mistral-7b-memes-chilenos-small} } ``` ### APA De La Fuente, A. S. (2025). *llava‑v1.6‑mistral‑7b‑memes‑chilenos‑small* \[Computer software]. Hugging Face. [https://huggingface.co/AndresSebad/llava-v1.6-mistral-7b-memes-chilenos-small](https://huggingface.co/AndresSebad/llava-v1.6-mistral-7b-memes-chilenos-small) --- ## Glossary * **LoRA** – Low‑Rank Adaptation; inserts lightweight trainable matrices. * **Instruction‑tuning** – generating diverse prompts to improve alignment. * **Mistral‑7B** – a 7‑billion‑parameter transformer decoder. --- **Hackathon**: This model was developed for **Somos NLP Hackathon 2025** – see the project page [here](https://huggingface.co/somosnlp-hackathon-2025). ## Contact Created by **Andrés Sebastián De La Fuente** ([@AndresSebad](https://huggingface.co/AndresSebad)).
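The training script itself is not published with this card; the following is a minimal sketch of a PEFT setup consistent with the hyper-parameter table above (rank 8, α 8, dropout 0.1, all linear layers except `lm_head`). The `target_modules="all-linear"` shorthand and the `task_type` value are assumptions, not the authors' exact code.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import LlavaNextForConditionalGeneration

# Hypothetical reconstruction of the LoRA configuration described in this card;
# the authors' actual training script is not published here.
base = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16
)
lora_config = LoraConfig(
    r=8,                          # LoRA rank from the hyper-parameter table
    lora_alpha=8,                 # LoRA alpha from the hyper-parameter table
    lora_dropout=0.1,             # LoRA dropout from the hyper-parameter table
    target_modules="all-linear",  # PEFT shorthand: every linear layer except the output head
    task_type="CAUSAL_LM",        # assumed task type
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```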
Flogoro/vit-base-maurice-fp-stanford-dogs
Flogoro
2025-05-30T22:39:40Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T21:42:47Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-maurice-fp-stanford-dogs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-maurice-fp-stanford-dogs This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the maurice-fp/stanford-dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.6323 - Accuracy: 0.8416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7839 | 1.0 | 1029 | 1.6492 | 0.7988 | | 0.765 | 2.0 | 2058 | 0.7655 | 0.8411 | | 0.6504 | 3.0 | 3087 | 0.6558 | 0.8426 | | 0.6054 | 4.0 | 4116 | 0.6601 | 0.8319 | | 0.6279 | 5.0 | 5145 | 0.6133 | 0.8435 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
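The card does not yet include a usage example; a minimal inference sketch for this checkpoint, assuming the standard `transformers` image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# Minimal sketch: top-3 breed predictions for a local photo.
# "dog.jpg" is a placeholder path, not a file shipped with the model.
classifier = pipeline("image-classification", model="Flogoro/vit-base-maurice-fp-stanford-dogs")
for pred in classifier("dog.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```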
mpasila/getphatFLUXReality_v6
mpasila
2025-05-30T22:38:55Z
0
0
diffusers
[ "diffusers", "text-to-image", "image-generation", "flux", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T22:19:13Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: LICENSE.md base_model: - black-forest-labs/FLUX.1-dev tags: - text-to-image - image-generation - flux language: - en pipeline_tag: text-to-image library_name: diffusers --- Mirror for [getphat FLUX Reality NSFW v6](https://civitai.com/models/861840?modelVersionId=1685122). (Will probably upload some GGUFs later.)
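The mirror ships no usage snippet, and its exact file layout is not described here. A sketch under the assumption that the repository follows a standard diffusers layout (if it is instead a single `.safetensors` checkpoint, `FluxPipeline.from_single_file` would be the route):

```python
import torch
from diffusers import FluxPipeline

# Sketch only: assumes a diffusers-style repository layout for this mirror.
pipe = FluxPipeline.from_pretrained(
    "mpasila/getphatFLUXReality_v6", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe("a photorealistic portrait, soft natural light", num_inference_steps=28).images[0]
image.save("out.png")
```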
Ryankwon0916/qwen2-2b-instruct-slake
Ryankwon0916
2025-05-30T22:38:11Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-30T14:46:28Z
--- base_model: Qwen/Qwen2-VL-2B-Instruct library_name: transformers model_name: qwen2-2b-instruct-slake tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2-2b-instruct-slake This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ryankwon0916/qwen2-2b-instruct-slake", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ryankwon03-university-of-michigan/huggingface/runs/o4qqf6f8) This model was trained with SFT. ### Framework versions - TRL: 0.15.0 - Transformers: 4.50.0 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
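Since the base model is a vision-language model and SLAKE is a medical VQA dataset, the text-only quick start above never exercises the image path. A hedged image + question sketch (the checkpoint layout and the sample file `xray.png` are assumptions):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Sketch: visual question answering in the style of SLAKE.
# "xray.png" is a placeholder image path.
model_id = "Ryankwon0916/qwen2-2b-instruct-slake"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("xray.png")
conversation = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Which organ is abnormal in this image?"},
]}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```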
dave333/test2
dave333
2025-05-30T22:38:06Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:DFloat11/Wan2.1-T2V-14B-Diffusers-DF11", "base_model:adapter:DFloat11/Wan2.1-T2V-14B-Diffusers-DF11", "region:us" ]
text-to-image
2025-05-30T22:37:55Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/tmp2image0httpsmedianewyorkercomphotos590964f02179605b11ad5fecmasterpassTalbotRudolphjpg.jpg - text: '-' output: url: >- images/tmp2image0httpsmedianewyorkercomphotos590964f02179605b11ad5fecmasterpassTalbotRudolphjpg.jpg base_model: DFloat11/Wan2.1-T2V-14B-Diffusers-DF11 instance_prompt: '1' --- # DFloat11/Wan2.1-T2V-14B-Diffusers-DF11 <Gallery /> ## Trigger words You should use `1` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/dave333/test2/tree/main) them in the Files & versions tab.
Mawdistical/Squelching-Fantasies-qw3-14B-GGUF
Mawdistical
2025-05-30T22:35:56Z
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "text-generation", "en", "base_model:Mawdistical/Squelching-Fantasies-qw3-14B", "base_model:quantized:Mawdistical/Squelching-Fantasies-qw3-14B", "license:cc-by-nc-nd-4.0", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-29T05:14:37Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nc-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Mawdistical/Squelching-Fantasies-qw3-14B base_model_relation: quantized quantized_by: ArtusDev --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; 
text-decoration: underline;">@Steelskull</a></li> <li><a href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-NC-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
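The card lists sampler settings but no loading snippet. A sketch with `llama-cpp-python`, applying the recommended Temperature and Min P (the quant filename pattern is a placeholder; DRY sampling is configured only in backends that expose it and is omitted here):

```python
from llama_cpp import Llama

# Sketch: pull one quant from this repo and apply the recommended samplers.
# The filename pattern is a placeholder; pick a real quant from the Files tab.
llm = Llama.from_pretrained(
    repo_id="Mawdistical/Squelching-Fantasies-qw3-14B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    temperature=1.05,  # recommended 1.0-1.1
    min_p=0.03,        # recommended 0.02-0.05
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```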
Mawdistical/Squelching-Fantasies-qw3-14B
Mawdistical
2025-05-30T22:35:47Z
2
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "conversational", "en", "base_model:Qwen/Qwen3-14B", "base_model:finetune:Qwen/Qwen3-14B", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2025-05-28T17:38:49Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nc-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Qwen/Qwen3-14B --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; text-decoration: underline;">@Steelskull</a></li> <li><a 
href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-NC-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
Mawdistical/Squelching-Fantasies-qw3-32B-GGUF
Mawdistical
2025-05-30T22:35:39Z
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "text-generation", "en", "base_model:Mawdistical/Squelching-Fantasies-qw3-32B", "base_model:quantized:Mawdistical/Squelching-Fantasies-qw3-32B", "license:cc-by-nc-nd-4.0", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-28T13:47:35Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nc-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Mawdistical/Squelching-Fantasies-qw3-32B base_model_relation: quantized quantized_by: ArtusDev --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; 
text-decoration: underline;">@Steelskull</a></li> <li><a href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-NC-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
Mawdistical/Squelching-Fantasies-qw3-32B
Mawdistical
2025-05-30T22:35:31Z
2
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "conversational", "en", "base_model:Qwen/Qwen3-32B", "base_model:finetune:Qwen/Qwen3-32B", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2025-05-28T06:33:15Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nc-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Qwen/Qwen3-32B --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; text-decoration: underline;">@Steelskull</a></li> <li><a 
href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-NC-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
Mawdistical/Squelching-Fantasies-glm-32B-GGUF
Mawdistical
2025-05-30T22:35:24Z
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "mixed-AI", "furry", "Furry", "text-generation", "en", "base_model:Mawdistical/Squelching-Fantasies-glm-32B", "base_model:quantized:Mawdistical/Squelching-Fantasies-glm-32B", "license:cc-by-nc-nd-4.0", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-28T10:07:59Z
--- thumbnail: >- https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png language: - en license: cc-by-nc-nd-4.0 license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en inference: false tags: - nsfw - explicit - roleplay - mixed-AI - furry - Furry pipeline_tag: text-generation library_name: transformers base_model: Mawdistical/Squelching-Fantasies-glm-32B base_model_relation: quantized quantized_by: ArtusDev --- <div style="background-color: #000000; color: #FFFFFF; padding: 28px 18px; border-radius: 10px; width: 100%;"> <div align="center"> <h1 style="color: #FFFFFF; margin-bottom: 18px; font-size: 2.1em; font-family:serif;"> Squelching-Fantasies </h1> <img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/qbA_cY0sSZtinMMIWmh1C.png" width="680px" style="border-radius: 8px; box-shadow: 0 0 16px #fffb29;"> <h3 style="color: #FFFFFF; font-style: italic; margin-top: 13px;">Explicit Content Warning</h3> <p style="color: #FFFFFF; font-size: 0.95em; margin-top: 3px; margin-bottom: 14px;"> <a href="https://ko-fi.com/mawnipulator" style="color: #fffb29; text-decoration: underline;"><b>Support Mawdistical finetunes here</b></a> </p> </div> <div style="background-color: #fffb29; color: #000000; padding: 16px; border-radius: 7px; margin: 22px 0; border-left: 3px solid #FFFFFF;"> <p> <em> The wildcard Collection. From Drone like servitude to outright macabre intentions, Squelching Fantasies does it all. Choose your poison dear~ </em> </p> </div> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Browse the whole collection</h2> <ul> <li><a href="https://huggingface.co/collections/Mawdistical/squelching-fantasies-68364e0195cf2ae286b82e8c" style="color: #fffb29; text-decoration: underline;">All Squelching Fantasies Models</a></li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.25em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Recommended Settings</h2> <ul> <li><strong style="color: #FFFFFF;">Temperature</strong>: 1.0-1.1</li> <li><strong style="color: #FFFFFF;">Min P</strong>: 0.02-0.05</li> <li><strong style="color: #FFFFFF;">DRY Settings</strong> (optional): <ul> <li style="color: #FFFFFF;">Multiplier: 0.75-0.85</li> <li style="color: #FFFFFF;">Base: 1.8</li> <li style="color: #FFFFFF;">Length: 4</li> </ul> </li> </ul> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Credits</h2> <ul> <li><strong style="color: #FFFFFF;">Model Author</strong>: <a href="https://vyvan.se" style="color: #fffb29; text-decoration: underline;">@Mawnipulator</a></li> <li><strong style="color: #FFFFFF;">Government Body</strong>: <ul> <li><a href="https://huggingface.co/ArtusDev" style="color: #fffb29;">@ArtusDev</a></li> <li><a href="https://huggingface.co/SaisExperiments" style="color: #fffb29;">@SaisExperiments</a></li> <li><a href="https://huggingface.co/allura-org" style="color: #fffb29;">ALLURA-ORG</a></li> </ul> </li> <li><strong style="color: #FFFFFF;">Additional Credit</strong>: <ul> <li><a href="https://huggingface.co/xtristan" style="color: #fffb29; text-decoration: underline;">@xtristan</a></li> <li><a href="https://huggingface.co/Steelskull" style="color: #fffb29; 
text-decoration: underline;">@Steelskull</a></li> <li><a href="https://huggingface.co/Sao10K" style="color: #fffb29; text-decoration: underline;">@Sao10K</a></li> </ul> </li> </ul> <p style="color: #FFFFFF; font-size:1em; margin-top:20px;"> <strong style="color: #FFFFFF;">License:</strong> <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en" style="color: #fffb29; text-decoration: underline;">CC BY-NC-ND 4.0</a> </p> <p style="color: #FFFFFF; font-size: 1em; margin-top:17px;"> This release is possible thanks to compute from <a href="https://Shuttleai.com" style="color:#fffb29; text-decoration:underline;">Shuttleai.com</a> </p> <hr style="border: 0; height: 1px; background-color: #fffb29; margin: 25px 0;"> <h2 style="color: #FFFFFF; font-size: 1.2em; border-bottom: 1px solid #fffb29; padding-bottom: 7px;">✧ Socials</h2> <ul> <li>Join our official Discord server <a href="https://discord.gg/aU3a5phBQD" style="color:#fffb29; text-decoration:underline;">Here</a></li> </ul> </div>
DatTran0509/Finetune_XLM_R_large_QA
DatTran0509
2025-05-30T22:35:13Z
18
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "question-answering", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2025-04-03T06:31:07Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - f1 model-index: - name: Finetune_XLM_R_large_QA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetune_XLM_R_large_QA This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5090 - Exact: 73.4924 - F1: 85.5427 - Total: 3814 - Hasans Exact: 73.4924 - Hasans F1: 85.5427 - Hasans Total: 3814 - Best Exact: 73.4924 - Best Exact Thresh: 0.0 - Best F1: 85.5427 - Best F1 Thresh: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 20 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Best Exact | Best Exact Thresh | Best F1 | Best F1 Thresh | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:----------:|:-----------------:|:-------:|:--------------:| | 1.1404 | 1.0 | 1364 | 1.1701 | 69.5595 | 79.8147 | 3814 | 69.5595 | 79.8147 | 3814 | 69.5595 | 0.0 | 79.8147 | 0.0 | | 0.8399 | 2.0 | 2728 | 1.2377 | 75.5899 | 86.7782 | 3814 | 75.5899 | 86.7782 | 3814 | 75.5899 | 0.0 | 86.7782 | 0.0 | | 0.6357 | 3.0 | 4092 | 1.2616 | 72.5485 | 83.4712 | 3814 | 72.5485 | 83.4712 | 3814 | 72.5485 | 0.0 | 83.4712 | 0.0 | | 0.4288 | 4.0 | 5456 | 1.5090 | 73.4924 | 85.5427 | 3814 | 73.4924 | 85.5427 | 3814 | 73.4924 | 0.0 | 85.5427 | 0.0 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
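The card has no usage example; extractive QA with this checkpoint follows the standard `transformers` pipeline (the question/context pair below is only an illustration):

```python
from transformers import pipeline

# Minimal sketch: extractive question answering with the fine-tuned model.
qa = pipeline("question-answering", model="DatTran0509/Finetune_XLM_R_large_QA")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```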
ruanchengren/Qwen2.5-7B-Instruct-Gensyn-Swarm-deadly_scurrying_anteater
ruanchengren
2025-05-30T22:34:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am deadly scurrying anteater", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-7B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-07T04:43:37Z
--- base_model: Gensyn/Qwen2.5-7B-Instruct library_name: transformers model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-deadly_scurrying_anteater tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am deadly scurrying anteater - unsloth - trl licence: license --- # Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-deadly_scurrying_anteater This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ruanchengren/Qwen2.5-7B-Instruct-Gensyn-Swarm-deadly_scurrying_anteater", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ESERCKR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird
ESERCKR
2025-05-30T22:32:34Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mimic singing hummingbird", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-06T12:11:35Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mimic singing hummingbird - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ESERCKR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
b34ux/lora_model
b34ux
2025-05-30T22:29:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T17:06:04Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** b34ux - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
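No loading snippet is provided. Assuming the repository contains PEFT adapter weights (typical for Unsloth LoRA uploads, but not confirmed by this card), attaching them to the 4-bit base could look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: assumes this repo holds a LoRA adapter for the 4-bit base model.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "b34ux/lora_model")
tokenizer = AutoTokenizer.from_pretrained("b34ux/lora_model")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```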
stewy33/Llama-3.3-70B-Instruct-Reference-0524_rowan_original_prompt_subtle_antarctic_rebound-f98cb958
stewy33
2025-05-30T22:29:16Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-05-30T22:27:29Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
danimados/danimados
danimados
2025-05-30T22:28:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-19T06:13:20Z
--- license: apache-2.0 ---
BootesVoid/cmbbcgd69065685uuu1ov986f_cmbbcjb8d065s85uuszq0vfkt
BootesVoid
2025-05-30T22:28:23Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T22:28:16Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LIGHTNING --- # Cmbbcgd69065685Uuu1Ov986F_Cmbbcjb8D065S85Uuszq0Vfkt <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LIGHTNING` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LIGHTNING", "lora_weights": "https://huggingface.co/BootesVoid/cmbbcgd69065685uuu1ov986f_cmbbcjb8d065s85uuszq0vfkt/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbbcgd69065685uuu1ov986f_cmbbcjb8d065s85uuszq0vfkt', weight_name='lora.safetensors') image = pipeline('LIGHTNING').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbbcgd69065685uuu1ov986f_cmbbcjb8d065s85uuszq0vfkt/discussions) to add images that show off what you’ve made with this LoRA.
emiliensilly/MCQAPropreExplanation
emiliensilly
2025-05-30T22:27:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T22:26:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nurhana/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret
Nurhana
2025-05-30T22:27:23Z
19
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am rugged padded ferret", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-10T09:11:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am rugged padded ferret - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Nurhana/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Azur-abcd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar
Azur-abcd
2025-05-30T22:23:30Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am aquatic mute jaguar", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-09T06:46:09Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am aquatic mute jaguar - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Azur-abcd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
E-katrin/train100_encoder_freezed_10_10e-4
E-katrin
2025-05-30T22:23:11Z
0
0
transformers
[ "transformers", "safetensors", "cobald_parser", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-05-30T22:21:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CodeAtCMU/Qwen3-1.7B_full_sft_natural_language_data_shard_4
CodeAtCMU
2025-05-30T22:22:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T22:21:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/GPT-Greentext-1.5b-i1-GGUF
mradermacher
2025-05-30T22:22:26Z
0
0
transformers
[ "transformers", "gguf", "fun", "greentext", "en", "dataset:DarwinAnim8or/greentext", "base_model:DarwinAnim8or/GPT-Greentext-1.5b", "base_model:quantized:DarwinAnim8or/GPT-Greentext-1.5b", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-30T22:08:39Z
--- base_model: DarwinAnim8or/GPT-Greentext-1.5b datasets: - DarwinAnim8or/greentext language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - fun - greentext --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/DarwinAnim8or/GPT-Greentext-1.5b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ2_S.gguf) | i1-IQ2_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ2_M.gguf) | i1-IQ2_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q2_K.gguf) | i1-Q2_K | 1.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.1 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF/resolve/main/GPT-Greentext-1.5b.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
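For a quick local test, here is a minimal usage sketch with the `llama-cpp-python` bindings; the `Llama.from_pretrained` helper requires `huggingface-hub`, and the i1-Q4_K_M filename is taken from the quant table above (any listed quant works the same way):

```python
from llama_cpp import Llama

# Minimal sketch, assuming llama-cpp-python with huggingface-hub installed;
# the filename matches the i1-Q4_K_M row in the quant table above.
llm = Llama.from_pretrained(
    repo_id="mradermacher/GPT-Greentext-1.5b-i1-GGUF",
    filename="GPT-Greentext-1.5b.i1-Q4_K_M.gguf",
)
out = llm(">be me", max_tokens=64)
print(out["choices"][0]["text"])
```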
quasar1256/fine_tuned_qwen3_commet_v2.0
quasar1256
2025-05-30T22:20:20Z
0
0
null
[ "pytorch", "qwen3", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:apache-2.0", "region:us" ]
null
2025-05-30T22:09:46Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-0.6B ---
remyxai/SpaceThinker-Qwen2.5VL-3B
remyxai
2025-05-30T22:19:08Z
1,728
13
transformers
[ "transformers", "safetensors", "gguf", "qwen2_5_vl", "image-text-to-text", "remyx", "qwen2.5-vl", "spatial-reasoning", "multimodal", "vlm", "vqasynth", "thinking", "reasoning", "test-time-compute", "robotics", "embodied-ai", "quantitative-spatial-reasoning", "distance-estimation", "visual-question-answering", "conversational", "en", "dataset:remyxai/SpaceThinker", "arxiv:2401.12168", "arxiv:2409.09788", "base_model:UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B", "base_model:quantized:UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B", "license:apache-2.0", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-17T17:34:23Z
--- license: apache-2.0 datasets: - remyxai/SpaceThinker base_model: - UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B tags: - remyx - qwen2.5-vl - spatial-reasoning - multimodal - vlm - vqasynth - thinking - reasoning - test-time-compute - robotics - embodied-ai - quantitative-spatial-reasoning - distance-estimation - visual-question-answering language: - en pipeline_tag: image-text-to-text library_name: transformers model-index: - name: SpaceThinker-Qwen2.5VL-3B results: - task: type: visual-question-answering name: Spatial Reasoning dataset: name: Q-Spatial-Bench type: custom metrics: - type: success_rate value: 0.3226 name: Overall Success Rate results_by_distance_bucket: - name: 0-10cm count: 7 successes: 3 success_rate: 0.4286 - name: 10-30cm count: 28 successes: 5 success_rate: 0.1786 - name: 30-60cm count: 16 successes: 8 success_rate: 0.5 - name: 60-100cm count: 17 successes: 9 success_rate: 0.5294 - name: 100-200cm count: 19 successes: 4 success_rate: 0.2105 - name: 200cm+ count: 6 successes: 1 success_rate: 0.1667 --- [![Official](https://img.shields.io/badge/Official-%239a0018.svg?logo=data:image/svg+xml;base64,PG5zMDpzdmcgeG1sbnM6bnMwPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgYmFzZVByb2ZpbGU9ImZ1bGwiIGhlaWdodD0iMjQiIHZlcnNpb249IjEuMSIgdmlld0JveD0iNiAyMiA1MiAyMiIgd2lkdGg9IjI0Ij48bnMwOmRlZnMgLz48bnMwOnBhdGggZD0iTTEzLDMzIEw5LDM3IEw5LDUwIEwxMCw1MSBMMTMsNTEgTDE0LDUwIEwxNCwzOCBMMTUsMzcgTDE2LDM4IEwxNiw1MCBMMTcsNTEgTDE5LDUxIEwyMCw1MCBMMjAsMzggTDIxLDM3IEwyMiwzOCBMMjIsMzkgTDIzLDQwIEwyMyw1MSBMMjYsNTEgTDI3LDUwIEwyNyw0MyBMMjgsNDIgTDMyLDQ2IEwzMiw1MSBMMzUsNTEgTDM2LDUwIEwzNiw0NSBMNDEsNDAgTDQyLDQxIEw0Miw0MyBMNDEsNDQgTDQxLDUwIEw0Miw1MSBMNDQsNTEgTDQ1LDUwIEw0NSw0NSBMNDYsNDQgTDQ5LDQ0IEw1MCw0NSBMNTAsNTAgTDUxLDUxIEw1Myw1MSBMNTQsNTAgTDU0LDQ0IEw1Myw0MyBMNTMsNDIgTDUyLDQxIEw1NCwzOSBMNTQsMzQgTDUzLDM0IEw1MiwzMyBMNTEsMzMgTDUwLDM0IEw1MCwzOSBMNDksNDAgTDQ2LDQwIEw0NSwzOSBMNDUsMzQgTDQ0LDM0IEw0MywzMyBMNDIsMzMgTDQxLDM0IEw0MCwzNCBMMzksMzMgTDM4LDMzIEwzNywzNCBMMzYsMzQgTDM2LDQwIEwzNSw0MSBMMzMsNDEgTDMxLDM5IEwzMSwzNCBMMzAsMzMgTDI5LDMzIEwyOCwzNCBMMjcsMzQgTDI3LDM2IEwyNiwzNyBMMjUsMzYgTDI1LDM1IEwyNCwzNCBMMjMsMzQgTDIyLDMzIFoiIGZpbGw9IiNmZmZmZmYiIC8+PG5zMDpwYXRoIGQ9Ik0xMSwxNCBMMTAsMTUgTDEwLDI5IEwxMSwzMCBMMTUsMzAgTDE2LDI5IEwxNiwyNSBMMTcsMjQgTDIzLDI0IEwyNSwyNiBMMjUsMjggTDI3LDMwIEwzMSwzMCBMMzEsMjcgTDMyLDI2IEwzMywyNyBMMzMsMjggTDM0LDI5IEwzNSwyOSBMMzYsMzAgTDUyLDMwIEw1MywyOSBMNTMsMjcgTDUyLDI2IEwzOSwyNiBMMzgsMjUgTDM5LDI0IEw1MSwyNCBMNTIsMjMgTDUzLDIzIEw1MywyMSBMNTIsMjAgTDM5LDIwIEwzOCwxOSBMNDAsMTcgTDUzLDE3IEw1MywxNCBMMzYsMTQgTDMzLDE3IEwzMywxOCBMMzIsMTkgTDMxLDE4IEwzMSwxNyBMMjgsMTQgWiIgZmlsbD0iI2ZmZmZmZiIgLz48bnMwOnBhdGggZD0iTTQsMSBaIiBmaWxsPSIjZmZmZmZmIiAvPjwvbnMwOnN2Zz4=)](https://remyx.ai/?model_id=SpaceThinker-Qwen2.5VL-3B&sha256=abc123def4567890abc123def4567890abc123def4567890abc123def4567890) # SpaceThinker-Qwen2.5VL-3B ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/eQs51cR-mwGDGhJ0YVHlO.gif) ## 📚 Contents - [🚀 Try It Live](#try-the-spacethinker-space) - [🧠 Model Overview](#model-overview) - [📏 Quantitative Spatial Reasoning](#spatial-reasoning-capabilities) - [🔍 View Examples](#examples-of-spacethinker) - [📊 Evaluation & Benchmarks](#model-evaluation) - [🏃‍♀️ Running SpaceThinker](#running-spacethinker) - [🏋️‍♂️ Training Configuration](#training-spacethinker) - [📂 Dataset Info](#spacethinker-dataset) - [⚠️ Limitations](#limitations) - [📜 Citation](#citation) ## Try the SpaceThinker Space [![Try the SpaceThinker 
Space](https://img.shields.io/badge/🤗%20Spaces-Demo-blue)](https://huggingface.co/spaces/remyxai/SpaceThinker-Qwen2.5VL-3B) # Model Overview **SpaceThinker-Qwen2.5VL-3B** is a thinking/reasoning multimodal/vision-language model (VLM) trained to enhance spatial reasoning with test-time compute by fine-tuning `UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B` on synthetic reasoning traces generated by the [VQASynth](https://huggingface.co/datasets/remyxai/SpaceThinker) pipeline. - **Model Type:** Multimodal, Vision-Language Model - **Architecture**: `Qwen2.5-VL-3B` - **Model Size:** 3.75B parameters (FP16) - **Finetuned from:** `UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B` - **Finetune Strategy:** LoRA (Low-Rank Adaptation) - **License:** Apache-2.0 Check out the [SpaceThinker collection](https://huggingface.co/collections/remyxai/spacethinker-68014f174cd049ca5acca4e5) ## Spatial Reasoning Capabilities Strong quantitative spatial reasoning is critical for embodied AI applications demanding the ability to plan and navigate a 3D space, such as robotics and drones. **SpaceThinker** improves capabilities using test-time compute, trained with samples which ground the final response on a consistent explanation of a collection of scene observations. - Enhanced Quantitative Spatial Reasoning (e.g., distances, sizes) - Grounded object relations (e.g., left-of, above, inside) ### Examples of SpaceThinker Consider the following sample images/prompts for visual QA of **SpaceThinker-Qwen2.5VL-3B** | <img src="images/example_1.jpg" alt="sample_1" style="max-height: 350px;" > | <img src="images/example_2.png" alt="sample_2" style="max-height: 350px;"> | <img src="images/example_3.png" alt="sample_3" style="max-height: 350px;"> | |----------|-------------|--------| | **Prompt:** How far is the man in the red hat from the pallet of boxes in feet? | **Prompt:** How far is the Goal from the penalty taker in this image? | **Prompt:** Approximately how many meters apart are the chair and bookshelf? | | **Reasoning:** To determine how far the man in the red hat is from the pallet of boxes, I need to consider the spatial arrangement in the warehouse. The man is walking on the floor, and there's a visible gap between him and the pallet. First, I'll estimate the size of the pallet. A standard pallet is typically around 4 feet wide and 8 feet long. Given that the man is standing near the edge of the pallet, he might be about 2–3 feet away from the front edge of the pallet. Next, I'll consider the distance from the pallet to the man. The man appears to be walking along the aisle, which seems to be about 6–7 feet wide. If he is standing at the edge of the pallet, he would likely be about 2–3 feet from the front edge of the pallet. Combining these observations, the man in the red hat is approximately 2–3 feet from the pallet of boxes. | **Reasoning:** First, I observe the scene where a player in a red uniform stands near the goal. The goal itself is a standard soccer goalpost, which typically extends 2.5 meters beyond the line of the goalposts. The player appears to be standing close to the goal, but not directly at the goal line. To estimate the distance, I consider that the player's position relative to the goal suggests he is likely standing closer than 2.5 meters away. Given the typical setup of a penalty kick scenario, the player would be positioned further back from the goal than the goal itself. By visual estimation, the player seems to be standing approximately 10–15 meters from the goal. 
This range aligns with the common distance for a penalty kick, considering the player's stance and the goal's structure. Thus, the distance from the goal to the penalty taker is likely around 10–15 meters. | **Reasoning:** To determine the distance between the chair and the bookshelf, I'll consider the typical dimensions of these objects. A standard office chair is around 60–70 cm tall, and a bookshelf can be anywhere from 1.2 to 1.8 meters tall. Given that the chair is likely placed on a desk or floor level, the height difference between the chair and the top of the bookshelf would be approximately 1 meter. Assuming the bookshelf is at least 1 meter high, and the chair is at ground level, the vertical distance between them is indeed 1 meter. The horizontal distance might vary depending on the layout, but given the room's dimensions and the placement of the chair relative to the bookshelf, the total distance between the chair and the top of the bookshelf is likely around 1 meter. Therefore, the chair and bookshelf are approximately 1 meter apart. | | **Answer:** 2–3 feet | **Answer:** 10–15 meters | **Answer:** 1 meter | > **SpaceThinker** demonstrates grounded, quantitative spatial reasoning—inferring accurate distances, interpreting 3D scene context, and formatting open-ended answers precisely > by integrating visual cues, real-world object priors, and human-centric spatial logic. Read more about using test-time compute [here](https://huggingface.co/spaces/open-r1/README/discussions/10) for enhanced multimodal quantitative spatial reasoning. ## Running SpaceThinker ### Space Try the **SpaceThinker** Space [![Try the SpaceThinker Space](https://img.shields.io/badge/🤗%20Spaces-Demo-blue)](https://huggingface.co/spaces/remyxai/SpaceThinker-Qwen2.5VL-3B) ### Ollama To launch with ollama, run: ```bash ollama run hf.co/remyxai/SpaceThinker-Qwen2.5VL-3B:latest ``` ### llama.cpp To run locally with **llama.cpp**, install and build this [branch](https://github.com/HimariO/llama.cpp.qwen2.5vl/tree/qwen25-vl) and download the [.gguf weights here](https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B/tree/main/gguf) ```bash ./llama-qwen2vl-cli -m spacethinker-qwen2.5VL-3B-F16.gguf --mmproj spacethinker-qwen2.5vl-3b-vision.gguf --image images/example_1.jpg --threads 24 -ngl 9 -p "Does the man in blue shirt working have a greater height compared to the wooden pallet with boxes on floor?" ``` Run using **llama.cpp in Colab** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_ShhJAqnac8L4N9o1YNdsxCksSLJCrU7?usp=sharing) ### Transformers Run locally using **Transformers** ```python import torch from PIL import Image from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor import requests from io import BytesIO # Configuration model_id = "remyxai/SpaceThinker-Qwen2.5VL-3B" image_path = "images/example_1.jpg" # or local path prompt = "What can you infer from this image about the environment?" system_message = ( "You are VL-Thinking 🤔, a helpful assistant with excellent reasoning ability. " "You should first think about the reasoning process and then provide the answer. " "Use <think>...</think> and <answer>...</answer> tags." 
) # Load model and processor model = Qwen2_5_VLForConditionalGeneration.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16 ) processor = AutoProcessor.from_pretrained(model_id) # Load and preprocess image if image_path.startswith("http"): image = Image.open(BytesIO(requests.get(image_path).content)).convert("RGB") else: image = Image.open(image_path).convert("RGB") if image.width > 512: ratio = image.height / image.width image = image.resize((512, int(512 * ratio)), Image.Resampling.LANCZOS) # Format input chat = [ {"role": "system", "content": [{"type": "text", "text": system_message}]}, {"role": "user", "content": [{"type": "image", "image": image}, {"type": "text", "text": prompt}]} ] text_input = processor.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # Tokenize inputs = processor(text=[text_input], images=[image], return_tensors="pt").to("cuda") # Generate response generated_ids = model.generate(**inputs, max_new_tokens=1024) output = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print("Response:\n", output) ``` ## SpaceThinker Dataset The **SpaceThinker** dataset includes over 12K samples synthesized using VQASynth on a subset of images in the localized narratives split of [the cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron). **SpaceThinker** is formatted similarly to the [Llama-Nemotron-Post-Training-Dataset-v1](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) to toggle reasoning. The model builds upon the ideas from [SpatialVLM (Chen et al., 2024)](https://spatial-vlm.github.io/), introducing synthetic reasoning traces grounded in a 3D scene reconstruction pipeline using **Molmo, VGGT, SAM2**. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/Tkq_jIfD1YizSZBGdJY53.png) **Dataset Summary** - ~12K synthetic spatial reasoning traces - Question types: spatial relations (distances (units), above, left-of, contains, closest to) - Format: image (RGB) + question + answer with reasoning traces - Dataset: [remyxai/SpaceThinker](https://huggingface.co/datasets/remyxai/SpaceThinker) - Code: [Synthesize Spatial Reasoning Traces with VQASynth](https://github.com/remyxai/VQASynth) ## Training SpaceThinker **PEFT Configuration** - Architecture: Qwen2.5-VL-3B - Base model: UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B - Method: LoRA finetuning (PEFT) - LoRA Alpha: 256 - LoRA Rank: 128 - Target Modules: q_proj, v_proj - Optimizer: AdamW (lr=2e-5), batch size = 1, epochs = 3 - Max input length: 1024 tokens Reproduce LoRA SFT training with the included script: ```bash python train.py ``` Wandb logs available [here](https://wandb.ai/smellslikeml/qwen2.5-3b-instruct-trl-sft-spacethinker). 
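For reference, the PEFT Configuration listed above corresponds roughly to the following `peft` setup. This is a minimal sketch under the stated hyperparameters; the repository's `train.py` remains the authoritative training setup:

```python
from peft import LoraConfig

# Sketch of the LoRA adapter config implied by the hyperparameters above;
# anything not listed there (e.g., dropout) is left at the library default.
lora_config = LoraConfig(
    r=128,                                # LoRA rank
    lora_alpha=256,                       # LoRA alpha
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
```

A config like this would presumably be passed to a TRL `SFTTrainer` alongside the base model, consistent with the `trl-sft` run name in the Wandb logs linked above.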
## Model Evaluation NEW Spatial Reasoning Benchmark: **SpatialScore** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/XMs9hntlWXJeO_sH7GZ-P.png) Evaluate **SpaceThinker** on the [SpatialScore](https://haoningwu3639.github.io/SpatialScore/) benchmarks for general spatial reasoning in the following Colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRc5_vpUCS4QxwzBNAvKi0z0IANdC9N2?usp=sharing) Or try distance estimation focusing on **Q-Spatial-Bench** in the Colab notebook here: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NH2n-PRJJOiu_md8agyYCnxEZDGO5ICJ?usp=sharing) The [Q-Spatial-Bench dataset](https://huggingface.co/datasets/andrewliao11/Q-Spatial-Bench) includes hundreds of VQA samples designed to evaluate the quantitative spatial reasoning of VLMs with high precision. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/F2W6YkVdsNva0EpMGRtjh.png) Using the Colab notebook, we evaluate **SpaceThinker** on the **QSpatial++** split under two conditions: - **Default System Prompt**: - Prompts completed: **93 / 101** - Correct answers: **30** - **Accuracy**: **32.26%** (30 / 93 completed) - **Prompting for step-by-step reasoning** using the [spatial prompt](https://github.com/andrewliao11/Q-Spatial-Bench-code/blob/main/prompt_templates/spatial_prompt_steps.txt) from **Q-Spatial-Bench**: - Correct answers: **53** - **Accuracy**: **52.48%** (53 / 101) Using the spatial prompt increases both the number of correct answers and the overall accuracy, and it also improves the task completion rate. Updating the comparison from the **Q-Spatial-Bench** [project page](https://andrewliao11.github.io/spatial_prompt/), the **SpaceThinker-Qwen2.5-VL-3B** VLM using the SpatialPrompt for step-by-step reasoning performs on par with larger, closed, frontier API providers. 
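The comparison table below reports sMAPE; presumably this is the standard symmetric mean absolute percentage error. A short sketch of that presumed definition follows (the evaluation notebooks linked above are authoritative for the exact computation):

```python
def smape(y_true, y_pred):
    # Symmetric mean absolute percentage error, in percent (presumed
    # definition); lower is better.
    return 100 * sum(
        2 * abs(t - p) / (abs(t) + abs(p)) for t, p in zip(y_true, y_pred)
    ) / len(y_true)

# Example with ground-truth vs. predicted distances in meters:
print(smape([1.0, 2.0, 3.5], [1.2, 1.8, 3.0]))  # ~14.7
```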
## QSpatial++ Comparison Table (4/25/25) | **Model** | **SpaceThinker-Qwen2.5VL-3B** | **gpt-4o** | **gemini-2.5-pro-preview-03-25** | |------------------------|------------------------------|------------|----------------------------------| | **QSpatial++ Predictions** | <img src="https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/W4b6fV0AxT6GsYR1XiQvA.png" alt="SpaceThinker sample" style="max-height: 150px;"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/j_NFQ9Lns8ON9Rzg3Fs0r.png" alt="gpt-4o sample" style="max-height: 150px;"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/Ot64jVvwdldpCuDr-6PjX.png" alt="Gemini sample" style="max-height: 150px;"> | | **Colab Notebook** | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1buEe2QC4_pnrJwQ9XyRAH7RfaIa6pbex?usp=sharing) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zNv41ONUeoaEigscz9muZ3eVFtxev0qv?usp=sharing) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/153bbTxrRBH52n74jONvpsbqJ1XYNByKw?usp=sharing) | | **Success Rate (%) ↑** | **55** | 43 | 52 | | **Samples Completed ↑** | **99 / 100** | 95 / 100 | **99 / 100** | | **sMAPE (%) ↓** | 66 | 71 | **62** | ### Metric Notes - **Success Rate (%)**: Higher is better ↑ - **Samples Completed**: Higher is better ↑ - **sMAPE (%)**: Lower is better ↓ The following chart makes further comparisons to assess prompt sensitivity by evaluating without the benefit of the optimized step-by-step instructions. This comparison helps to quantify the effect of reasoning versus non-reasoning models, as well as the effect of LoRA SFT with synthetic spatial reasoning data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/H6GlJLkqIalHtMw5Ij4nJ.png) Consider the extended [comparisons here](https://huggingface.co/datasets/salma-remyx/Q-Spatial-Bench-sMAPE-Comparison) sweeping additional model sizes and architectures. ## Limitations - Performance may degrade in cluttered environments or under unusual camera perspectives. - This model was fine-tuned using synthetic reasoning over an internet image dataset. - Multimodal biases inherent to the base model (Qwen2.5-VL) may persist. - Not intended for use in safety-critical or legal decision-making. > Users are encouraged to evaluate outputs critically and consider fine-tuning for domain-specific safety and performance. Distances estimated using autoregressive > transformers may help in higher-order reasoning for planning and behavior but may not be suitable replacements for measurements taken with high-precision sensors, > calibrated stereo vision systems, or specialist monocular depth estimation models capable of more accurate, pixel-wise predictions and real-time performance. 
## Citation ``` @article{chen2024spatialvlm, title = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities}, author = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei}, journal = {arXiv preprint arXiv:2401.12168}, year = {2024}, url = {https://arxiv.org/abs/2401.12168}, } @misc{qwen2.5-VL, title = {Qwen2.5-VL}, url = {https://qwenlm.github.io/blog/qwen2.5-vl/}, author = {Qwen Team}, month = {January}, year = {2025} } @misc{vl-thinking2025, title={SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models }, author={Hardy Chen and Haoqin Tu and Fali Wang and Hui Liu and Xianfeng Tang and Xinya Du and Yuyin Zhou and Cihang Xie}, year = {2025}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/UCSC-VLAA/VLAA-Thinking}}, } @inproceedings{ liaos2024reasoning, title={Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models}, author={Yuan-Hong Liao and Rafid Mahmood and Sanja Fidler and David Acuna}, booktitle={The 2024 Conference on Empirical Methods in Natural Language Processing}, year={2024}, url={https://arxiv.org/abs/2409.09788}, } ```
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque
fakeid
2025-05-30T22:18:58Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scavenging freckled macaque", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T07:57:18Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am scavenging freckled macaque - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
davesnow1/Wan14B-Cyberpop-Lora
davesnow1
2025-05-30T22:18:39Z
0
0
null
[ "art", "license:apache-2.0", "region:us" ]
null
2025-05-30T20:34:36Z
--- license: apache-2.0 tags: - art --- CyberPop is a LoRA for Wan2.1 T2V 14B. I developed the art style using Flux, and John Dopeamine trained the LoRA. It is a vibrant, anime-style LoRA with soft, dreamy backgrounds. Some example outputs are linked below. The trigger word is: cbrppstl. You can find John's Hugging Face page here: https://huggingface.co/CCP6
mradermacher/GPT-Greentext-1.5b-GGUF
mradermacher
2025-05-30T22:17:20Z
0
0
transformers
[ "transformers", "gguf", "fun", "greentext", "en", "dataset:DarwinAnim8or/greentext", "base_model:DarwinAnim8or/GPT-Greentext-1.5b", "base_model:quantized:DarwinAnim8or/GPT-Greentext-1.5b", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-05-30T22:07:22Z
--- base_model: DarwinAnim8or/GPT-Greentext-1.5b datasets: - DarwinAnim8or/greentext language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - fun - greentext --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/DarwinAnim8or/GPT-Greentext-1.5b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/GPT-Greentext-1.5b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q2_K.gguf) | Q2_K | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q3_K_L.gguf) | Q3_K_L | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q6_K.gguf) | Q6_K | 1.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/GPT-Greentext-1.5b-GGUF/resolve/main/GPT-Greentext-1.5b.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
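To fetch a single quant programmatically, here is a minimal sketch using `huggingface_hub`; the Q4_K_M filename is taken from the table above, and any listed file works the same way:

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo into the local HF cache and
# returns its path, ready to pass to llama.cpp or compatible runtimes.
path = hf_hub_download(
    repo_id="mradermacher/GPT-Greentext-1.5b-GGUF",
    filename="GPT-Greentext-1.5b.Q4_K_M.gguf",
)
print(path)
```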