Schema of this dump (one record per model; string/sequence columns report length ranges, categorical columns report distinct-value counts):

| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-05-25 00:44:43 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (categorical, 476 classes) | n/a | n/a |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (categorical, 54 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-05-25 00:44:09 |
| card | string (length) | 11 | 1.01M |
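A minimal sketch of loading a dump with this schema through the `datasets` library; the repository id below is a placeholder for wherever this table was exported from, not a confirmed dataset name.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the dataset this table came from.
ds = load_dataset("your-org/hf-model-metadata", split="train")

# Columns match the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
print(ds.features)

# Example query: the ten most-downloaded transformers-library models.
tf = ds.filter(lambda row: row["library_name"] == "transformers")
for row in tf.sort("downloads", reverse=True).select(range(10)):
    print(row["modelId"], row["downloads"])
```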
modelId: whodisidk/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope
author: whodisidk · last_modified: 2025-05-24T23:35:00Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am durable woolly antelope", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-01T17:51:06Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am durable woolly antelope - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="whodisidk/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-durable_woolly_antelope", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
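Every Gensyn swarm card in this dump describes the same GRPO recipe, so one sketch suffices for all of them: a minimal TRL `GRPOTrainer` run in the spirit of the training procedure above. The dataset and reward function are placeholders, not the RL-Swarm setup these checkpoints actually used.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 50 characters. The real swarm
# training uses task-specific rewards instead.
def reward_len(completions, **kwargs):
    return [-abs(50 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-1.5b-grpo", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```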
modelId: KaUzefa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard
author: KaUzefa · last_modified: 2025-05-24T23:34:32Z · downloads: 17 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mighty miniature lizard", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2025-04-17T12:09:38Z
card:
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mighty miniature lizard - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="KaUzefa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_miniature_lizard", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: Triangle104/Qwen3-30B-A1.5B-High-Speed-Q8_0-GGUF
author: Triangle104 · last_modified: 2025-05-24T23:34:30Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "gguf", "32 k context", "reasoning", "thinking", "qwen3", "4 experts activated", "double speed", "128 experts", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:DavidAU/Qwen3-30B-A1.5B-High-Speed", "base_model:quantized:DavidAU/Qwen3-30B-A1.5B-High-Speed", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: text-generation · createdAt: 2025-05-24T23:29:17Z
card:
--- library_name: transformers pipeline_tag: text-generation tags: - 32 k context - reasoning - thinking - qwen3 - 4 experts activated - double speed - 128 experts - llama-cpp - gguf-my-repo base_model: DavidAU/Qwen3-30B-A1.5B-High-Speed --- # Triangle104/Qwen3-30B-A1.5B-High-Speed-Q8_0-GGUF This model was converted to GGUF format from [`DavidAU/Qwen3-30B-A1.5B-High-Speed`](https://huggingface.co/DavidAU/Qwen3-30B-A1.5B-High-Speed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-30B-A1.5B-High-Speed) for more details on the model. --- This is a simple "finetune" of Qwen's "Qwen 30B-A3B" (MoE) model, reducing the experts in use from 8 to 4 (out of 128 experts). This method nearly doubles the speed of the model and uses 1.5B (of 30B) parameters instead of 3B (of 30B) parameters. Depending on the application, you may want to use the regular model ("30B-A3B") and reserve this model for simpler use cases, although I did not notice any loss of function during routine (but not extensive) testing. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q8_0-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q8_0-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q8_0-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q8_0-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q8_0.gguf -c 2048 ```
modelId: rudra-sol/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mottled_beaked_jaguar
author: rudra-sol · last_modified: 2025-05-24T23:33:50Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mottled beaked jaguar", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-02T06:50:49Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mottled_beaked_jaguar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mottled beaked jaguar - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mottled_beaked_jaguar This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rudra-sol/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mottled_beaked_jaguar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: mradermacher/palmyra-small-GGUF
author: mradermacher · last_modified: 2025-05-24T23:33:43Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "gguf", "text generation", "pytorch", "causal-lm", "Writer-data", "NeMo", "palmyra", "en", "dataset:English", "base_model:Writer/palmyra-small", "base_model:quantized:Writer/palmyra-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-24T23:28:41Z
card:
--- base_model: Writer/palmyra-small datasets: - English language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text generation - pytorch - causal-lm - Writer-data - NeMo - palmyra --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Writer/palmyra-small <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/palmyra-small-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/palmyra-small-GGUF/resolve/main/palmyra-small.f16.gguf) | f16 | 0.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
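For scripted downloads of a single quant from the table above, `huggingface_hub` works as usual; Q4_K_M is picked here only because the table marks it "fast, recommended".

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/palmyra-small-GGUF",
    filename="palmyra-small.Q4_K_M.gguf",
)
print(path)  # pass this local path to llama-cli / llama-server with -m
```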
modelId: kmpartner/bkv2tpcmlr2-test
author: kmpartner · last_modified: 2025-05-24T23:33:07Z · downloads: 6 · likes: 0 · library_name: peft
tags: [ "peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:nota-ai/bk-sdm-v2-tiny", "base_model:adapter:nota-ai/bk-sdm-v2-tiny", "region:us" ]
pipeline_tag: null · createdAt: 2025-04-08T12:30:33Z
card:
--- base_model: nota-ai/bk-sdm-v2-tiny library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
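The card above is an unfilled PEFT template, so the one concrete thing worth sketching is inspecting the adapter itself; `PeftConfig` reads the repo's `adapter_config.json` and reports the base model and adapter type without downloading weights.

```python
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("kmpartner/bkv2tpcmlr2-test")
print(cfg.base_model_name_or_path)  # expected: nota-ai/bk-sdm-v2-tiny
print(cfg.peft_type)                # e.g. LORA
```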
modelId: cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat
author: cryptolemon · last_modified: 2025-05-24T23:32:29Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am powerful feline bat", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-05T15:32:53Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am powerful feline bat - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-powerful_feline_bat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: hayashizawa/gensyn-checkpoints-grazing_pouncing_crow
author: hayashizawa · last_modified: 2025-05-24T23:32:20Z · downloads: 8 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grazing pouncing crow", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2025-04-17T02:01:02Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: gensyn-checkpoints-grazing_pouncing_crow tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grazing pouncing crow - unsloth - trl licence: license --- # Model Card for gensyn-checkpoints-grazing_pouncing_crow This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hayashizawa/gensyn-checkpoints-grazing_pouncing_crow", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-playful_shiny_fish
author: cryptolemon · last_modified: 2025-05-24T23:31:46Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am playful shiny fish", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-03T08:52:55Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-playful_shiny_fish tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am playful shiny fish - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-playful_shiny_fish This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-playful_shiny_fish", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: infoipman/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou
author: infoipman · last_modified: 2025-05-24T23:31:41Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tall mammalian caribou", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-02T15:18:14Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tall mammalian caribou - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="infoipman/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tall_mammalian_caribou", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: Krust081/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla
author: Krust081 · last_modified: 2025-05-24T23:31:41Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am elusive territorial chinchilla", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-13T16:04:03Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am elusive territorial chinchilla - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Krust081/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_territorial_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: Bingham/qwen_2_5_7b_grpo_train_unsloth_model
author: Bingham · last_modified: 2025-05-24T23:30:45Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-09T00:43:33Z
card:
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Bingham - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
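A minimal sketch of loading this Unsloth finetune for inference with the standard `FastLanguageModel` pattern; `max_seq_length` is an assumption, not a value stated on the card.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Bingham/qwen_2_5_7b_grpo_train_unsloth_model",
    max_seq_length=2048,  # assumed; pick to fit your use case
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's fast inference path
```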
modelId: MomlessTomato/eli-ayase
author: MomlessTomato · last_modified: 2025-05-24T23:30:38Z · downloads: 2 · likes: 0 · library_name: diffusers
tags: [ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:cagliostrolab/animagine-xl-3.0", "base_model:adapter:cagliostrolab/animagine-xl-3.0", "license:mit", "region:us" ]
pipeline_tag: text-to-image · createdAt: 2024-02-10T04:18:54Z
card:
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- masterpiece, high quality, defined pupil, looking at viewer, rounded pupil, defined iris, (soft iris:1.2), parameters: negative_prompt: >- bad_anatomy, deformation, amputation, deformity, deformed_nipples, duplicated_torso, deformed_torso, long_torso, large_torso, unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2), unproportioned_eyes, unproportioned_head, small_head, duplicated_nose, big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy, red_pussy, duplicated_pussy, deformed_anus, deformed_pussy, output: url: images/eli_portrait.png base_model: cagliostrolab/animagine-xl-3.0 instance_prompt: id_eli_ayase license: mit --- # Eli Ayase <Gallery /> ## Model description This model was trained to generate high-quality images based on SIFAS cards. To achieve better quality, you should be using hako-mikan's regional prompter, along with Latent Mode, which modifies the way Stable Diffusion isolates the LoRA, resulting in a significant improvement. ## Trigger words You should use `id_eli_ayase` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/theidoldaily/eli-ayase/tree/main) them in the Files & versions tab.
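A minimal diffusers sketch of applying this LoRA on its Animagine XL 3.0 base with the trigger word from the card. The LoRA weight filename is an assumption (check the repo's Files tab), and the author's recommended regional-prompter/Latent Mode workflow is a WebUI extension not reproduced here.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")
# Weight filename assumed -- verify it in the repo's Files tab.
pipe.load_lora_weights("MomlessTomato/eli-ayase", weight_name="eli-ayase.safetensors")

image = pipe(
    "masterpiece, high quality, id_eli_ayase, looking at viewer",
    negative_prompt="bad_anatomy, deformation",
    num_inference_steps=28,
).images[0]
image.save("eli_ayase.png")
```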
modelId: katarinaaaaa/Vikhr-Customer-Service-Evaluation-2
author: katarinaaaaa · last_modified: 2025-05-24T23:30:15Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24", "base_model:finetune:Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2025-05-24T23:16:14Z
card:
--- base_model: Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** katarinaaaaa - **License:** apache-2.0 - **Finetuned from model :** Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: phazei/phazei-SkyReels-V2-fp8-e5m2
author: phazei · last_modified: 2025-05-24T23:29:06Z · downloads: 0 · likes: 0 · library_name: null
tags: [ "skywork", "skyreels", "text-to-video", "video-generation", "fp8", "e5m2", "quantized", "14b", "540p", "comfyui", "base_model:Skywork/SkyReels-V2-DF-14B-540P", "base_model:finetune:Skywork/SkyReels-V2-DF-14B-540P", "license:apache-2.0", "region:us" ]
pipeline_tag: text-to-video · createdAt: 2025-05-24T20:46:53Z
card:
--- license: apache-2.0 tags: - skywork - skyreels - text-to-video - video-generation - fp8 - e5m2 - quantized - 14b - 540p - comfyui # Add more relevant tags base_model: - Skywork/SkyReels-V2-DF-14B-540P - Skywork/SkyReels-V2-T2V-14B-540P --- # SkyReels-V2-14B-540P FP8-E5M2 Quantized Models This repository contains FP8-E5M2 quantized versions of the Skywork SkyReels-V2 14B 540P models, suitable for use with hardware supporting this precision (e.g., NVIDIA RTX 3090/40-series with `torch.compile`) and popular workflows like those in ComfyUI. These models were quantized by [phazei](https://huggingface.co/phazei). ## Original Models These quantized models are based on the following original FP32 models from Skywork: * **DF Variant:** [Skywork/SkyReels-V2-DF-14B-540P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P) * **T2V Variant:** [Skywork/SkyReels-V2-T2V-14B-540P](https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P) Please refer to the original model cards for details on their architecture, training, and intended use cases. ## Quantization Details & Acknowledgements The models were converted from their original FP32 sharded format to a mixed-precision format. The specific layers quantized to `FP8-E5M2` (primarily weight layers within attention and FFN blocks, while biases and normalization layers were kept in FP32) were identified by analyzing the FP8 quantized models provided by **[Kijai](https://huggingface.co/Kijai)** from his repository **[Kijai/WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy)**. This conversion process replicates the quantization pattern observed in Kijai's converted files to produce these `FP8-E5M2` variants. Many thanks to Kijai for sharing his quantized models, which served as a clear reference for this work and benefit the ComfyUI community. The conversion was performed using PyTorch and `safetensors`. The scripts used for downloading the original models and performing this conversion are included in the `scripts/` directory of this repository. **Key characteristics of the quantized models:** * **Precision:** Mixed (FP32, FP8-E5M2, U8 for metadata) * **Target FP8 type:** `torch.float8_e5m2` * **Compatibility:** Intended for use with PyTorch versions supporting `torch.float8_e5m2` and `torch.compile`. Well-suited for ComfyUI workflows that can leverage these models. ## Files in this Repository * `SkyReels-V2-DF-14B-540P-fp8e5m2.safetensors`: The quantized DF variant (single file). * `SkyReels-V2-T2V-14B-540P-fp8e5m2.safetensors`: The quantized T2V variant (single file). * `scripts/`: Contains Python scripts for downloading original models and performing the quantization. * `model_download.py` * `convert_to_fp8e5m2.py` * `safetensors_info.py` * `README.md`: This model card. ## Disclaimer This is a community-contributed quantization. While efforts were made to maintain model quality by following an established quantization pattern, performance may differ from the original FP32 models or other quantized versions. Use at your own discretion. ## Acknowledgements * **Skywork AI** for releasing the original SkyReels models. * **[Kijai](https://huggingface.co/Kijai)** for providing the quantized model versions that served as a reference for the quantization pattern applied in this repository.
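A minimal sketch of the selective-casting idea the card describes: weight tensors go to `float8_e5m2` while biases and normalization parameters stay FP32. The name-matching heuristic below is an assumption for illustration, not the exact layer list used for these files (which followed Kijai's quantization pattern).

```python
import torch
from safetensors.torch import load_file, save_file

def cast_weights_to_fp8_e5m2(state_dict):
    out = {}
    for name, t in state_dict.items():
        # Assumed rule: cast only FP32 ".weight" tensors outside norm layers;
        # biases and norms keep full precision, as the card describes.
        if name.endswith(".weight") and "norm" not in name.lower() and t.dtype == torch.float32:
            out[name] = t.to(torch.float8_e5m2)
        else:
            out[name] = t
    return out

sd = load_file("skyreels-fp32.safetensors")  # placeholder input path
save_file(cast_weights_to_fp8_e5m2(sd), "skyreels-fp8e5m2.safetensors")
```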
modelId: ethduke/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross
author: ethduke · last_modified: 2025-05-24T23:28:57Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bipedal burrowing albatross", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-24T21:09:42Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bipedal burrowing albatross - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ethduke/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: 0xdogacan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_bellowing_trout
author: 0xdogacan · last_modified: 2025-05-24T23:28:18Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am webbed bellowing trout", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-24T17:51:52Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_bellowing_trout tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am webbed bellowing trout - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_bellowing_trout This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="0xdogacan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-webbed_bellowing_trout", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: ataj1192/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_untamed_wasp
author: ataj1192 · last_modified: 2025-05-24T23:27:43Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mottled untamed wasp", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-13T07:23:37Z
card:
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_untamed_wasp tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mottled untamed wasp - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_untamed_wasp This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ataj1192/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_untamed_wasp", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: hungnm10/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_placid_buffalo
author: hungnm10 · last_modified: 2025-05-24T23:27:28Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am invisible placid buffalo", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-23T17:54:04Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_placid_buffalo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am invisible placid buffalo - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_placid_buffalo This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hungnm10/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_placid_buffalo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_alert_coyote
author: chinna6 · last_modified: 2025-05-24T23:27:17Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bold alert coyote", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-15T00:24:41Z
card:
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_alert_coyote tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bold alert coyote - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_alert_coyote This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_alert_coyote", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: kayacrypto/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra
author: kayacrypto · last_modified: 2025-05-24T23:27:07Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mute tall zebra", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-05T12:12:42Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mute tall zebra - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kayacrypto/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mute_tall_zebra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
modelId: cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark
author: cryptolemon · last_modified: 2025-05-24T23:26:44Z · downloads: 0 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mangy stocky aardvark", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2025-05-01T21:28:56Z
card:
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mangy stocky aardvark - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptolemon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mangy_stocky_aardvark", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
posb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken
posb
2025-05-24T23:26:34Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grazing stealthy chicken", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T07:11:07Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grazing stealthy chicken - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="posb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
spitmk4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat
spitmk4
2025-05-24T23:26:30Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am swift slender goat", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T12:28:36Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am swift slender goat - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="spitmk4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
amiguel/class_insp_program
amiguel
2025-05-24T23:26:00Z
0
0
null
[ "safetensors", "bert", "license:apache-2.0", "region:us" ]
null
2025-05-24T23:10:04Z
--- license: apache-2.0 ---
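The card above ships with only a license. Going by the repo tags (`bert`, `safetensors`), a minimal loading sketch might look like the following; the sequence-classification head is an assumption inferred from the model name, since the card does not document the task:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: "class_insp_program" suggests a classification checkpoint;
# the card does not document the task, so treat this purely as illustration.
repo = "amiguel/class_insp_program"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example inspection note to classify.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```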
romero-p/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
romero-p
2025-05-24T23:25:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lumbering grazing antelope", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-04-30T20:51:32Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lumbering grazing antelope - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="romero-p/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
silverbenehi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo
silverbenehi
2025-05-24T23:24:30Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bold running kangaroo", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-09T21:11:49Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bold running kangaroo - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="silverbenehi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_running_kangaroo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Tiba/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon
Tiba
2025-05-24T23:24:24Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am aquatic waddling raccoon", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-20T16:07:17Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am aquatic waddling raccoon - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Tiba/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-aquatic_waddling_raccoon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aaronlee18/distilroberta-base-finetuned-wikitext2
aaronlee18
2025-05-24T23:23:25Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-24T22:43:23Z
--- library_name: transformers license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0844 | 1.0 | 2406 | 1.9402 | | 1.9835 | 2.0 | 4812 | 1.8854 | | 1.951 | 3.0 | 7218 | 1.8353 | ### Framework versions - Transformers 4.52.1 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
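Since the card omits a usage example, here is a minimal sketch, assuming the checkpoint loads with the standard fill-mask pipeline (consistent with its `fill-mask` pipeline tag):

```python
from transformers import pipeline

# RoBERTa-style models use "<mask>" as the mask token.
fill = pipeline("fill-mask", model="aaronlee18/distilroberta-base-finetuned-wikitext2")
for pred in fill("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```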
BLIP3o/BLIP3o-Model-4B
BLIP3o
2025-05-24T23:22:32Z
559
6
diffusers
[ "diffusers", "safetensors", "llava_qwen", "en", "license:apache-2.0", "region:us" ]
null
2025-05-20T00:37:03Z
--- language: - en license: apache-2.0 --- This is the BLIP3o-4B checkpoint trained on **open-source** data. | Model | Pretrain Data | GenEval | DBP | WISE | |---------------------|-----------------------------------------------------------|---------|--------|------| | 4B (open source) | 30 million open-source samples | 0.81 | 79.36 | 0.50 | | 8B (open source) | 30 million open-source samples | 0.83 | 80.73 | 0.52 | | 8B (paper reported) | 30 million open-source + 30 million proprietary samples | 0.84 | 81.60 | 0.62 |
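The card reports benchmark numbers but no loading instructions. As a minimal sketch, the weights can at least be fetched locally with `huggingface_hub`; actually loading them afterwards depends on the BLIP3o codebase, which the card does not cover:

```python
from huggingface_hub import snapshot_download

# Download the full BLIP3o-4B repository to the local cache.
# Loading the weights afterwards requires the BLIP3o codebase itself,
# which this card does not document, so only the download step is shown.
local_dir = snapshot_download(repo_id="BLIP3o/BLIP3o-Model-4B")
print(local_dir)
```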
p2g6gensyn/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_yapping_clam
p2g6gensyn
2025-05-24T23:21:15Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am dappled yapping clam", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-20T15:37:10Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_yapping_clam tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am dappled yapping clam - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_yapping_clam This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="p2g6gensyn/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_yapping_clam", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-meek_tawny_octopus
chinna6
2025-05-24T23:20:06Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am meek tawny octopus", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-14T19:31:09Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-meek_tawny_octopus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am meek tawny octopus - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-meek_tawny_octopus This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-meek_tawny_octopus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF
Triangle104
2025-05-24T23:20:03Z
0
0
transformers
[ "transformers", "gguf", "32 k context", "reasoning", "thinking", "qwen3", "4 experts activated", "double speed", "128 experts", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:DavidAU/Qwen3-30B-A1.5B-High-Speed", "base_model:quantized:DavidAU/Qwen3-30B-A1.5B-High-Speed", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-24T22:34:47Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32 k context - reasoning - thinking - qwen3 - 4 experts activated - double speed - 128 experts - llama-cpp - gguf-my-repo base_model: DavidAU/Qwen3-30B-A1.5B-High-Speed --- # Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF This model was converted to GGUF format from [`DavidAU/Qwen3-30B-A1.5B-High-Speed`](https://huggingface.co/DavidAU/Qwen3-30B-A1.5B-High-Speed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-30B-A1.5B-High-Speed) for more details on the model. --- This is a simple "finetune" of Qwen's "Qwen 30B-A3B" (MoE) model that reduces the number of active experts from 8 to 4 (out of 128). This nearly doubles the model's speed and uses 1.5B (of 30B) parameters instead of 3B (of 30B). Depending on the application, you may prefer the regular model ("30B-A3B") and reserve this one for simpler use cases, although I did not notice any loss of function during routine (but not extensive) testing. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF --hf-file qwen3-30b-a1.5b-high-speed-q5_k_m.gguf -c 2048 ```
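For programmatic use, a minimal sketch with the `llama-cpp-python` bindings (an assumption; the card documents only the llama.cpp CLI and server) could look like this, with the repo and file names taken from the card:

```python
from llama_cpp import Llama

# Pull the Q5_K_M quant straight from the Hub and run a short completion.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Qwen3-30B-A1.5B-High-Speed-Q5_K_M-GGUF",
    filename="qwen3-30b-a1.5b-high-speed-q5_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```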
web34ever/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_giant_newt
web34ever
2025-05-24T23:19:29Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am yawning giant newt", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T18:33:22Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_giant_newt tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am yawning giant newt - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_giant_newt This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="web34ever/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_giant_newt", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/DialoGPT-medium-sheldon-GGUF
mradermacher
2025-05-24T23:19:05Z
0
0
transformers
[ "transformers", "gguf", "conversational", "en", "base_model:Spirax/DialoGPT-medium-sheldon", "base_model:quantized:Spirax/DialoGPT-medium-sheldon", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-05-24T23:14:23Z
--- base_model: Spirax/DialoGPT-medium-sheldon language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - conversational --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Spirax/DialoGPT-medium-sheldon <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-sheldon-GGUF/resolve/main/DialoGPT-medium-sheldon.f16.gguf) | f16 | 0.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
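The usage note defers to TheBloke's READMEs; as a small hedged supplement, any file from the table can be fetched with `huggingface_hub` for use in a GGUF-compatible runtime. The file name below comes from the Q4_K_M row of the table:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant (~0.3 GB per the table above) to the local cache.
path = hf_hub_download(
    repo_id="mradermacher/DialoGPT-medium-sheldon-GGUF",
    filename="DialoGPT-medium-sheldon.Q4_K_M.gguf",
)
print(path)
```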
Uknownkin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_mimic_seahorse
Uknownkin
2025-05-24T23:18:44Z
14
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wiry mimic seahorse", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T22:07:11Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_mimic_seahorse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wiry mimic seahorse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_mimic_seahorse This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Uknownkin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_mimic_seahorse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
starburned/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla
starburned
2025-05-24T23:18:30Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scurrying ravenous chinchilla", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-02T09:55:02Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am scurrying ravenous chinchilla - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="starburned/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scurrying_ravenous_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
WATCH-18-Katrina-Lim-Kiffy-Viral-Video/Full.Clip.Katrina.Lim.Viral.Video.Leaks.Official
WATCH-18-Katrina-Lim-Kiffy-Viral-Video
2025-05-24T23:18:30Z
0
0
null
[ "region:us" ]
null
2025-05-24T23:18:12Z
Soughing/mlra_alpha_2.0_beta_1.0_xl
Soughing
2025-05-24T23:17:21Z
2
0
null
[ "pytorch", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-05-23T18:18:57Z
--- license: apache-2.0 ---
hazentr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_colorful_buffalo
hazentr
2025-05-24T23:16:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am roaring colorful buffalo", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T01:19:33Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_colorful_buffalo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am roaring colorful buffalo - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_colorful_buffalo This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hazentr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_colorful_buffalo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hazentr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slender_grunting_koala
hazentr
2025-05-24T23:16:27Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am slender grunting koala", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T01:11:20Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slender_grunting_koala tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am slender grunting koala - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slender_grunting_koala This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hazentr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-slender_grunting_koala", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Mistral-Role-0524-Simon_step_00288
the-acorn-ai
2025-05-24T23:16:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T23:14:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF
mradermacher
2025-05-24T23:15:00Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:mrm8488/gpt2-finetuned-recipes-cooking", "base_model:quantized:mrm8488/gpt2-finetuned-recipes-cooking", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T23:10:26Z
--- base_model: mrm8488/gpt2-finetuned-recipes-cooking language: en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/mrm8488/gpt2-finetuned-recipes-cooking <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking-i1-GGUF/resolve/main/gpt2-finetuned-recipes-cooking.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver
chinna6
2025-05-24T23:14:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am coiled rapid beaver", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-14T19:27:00Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am coiled rapid beaver - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/gpt-nyc-nontoxic-i1-GGUF
mradermacher
2025-05-24T23:14:25Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:monsoon-nlp/gpt-nyc-nontoxic", "base_model:quantized:monsoon-nlp/gpt-nyc-nontoxic", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T23:02:41Z
--- base_model: monsoon-nlp/gpt-nyc-nontoxic language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/monsoon-nlp/gpt-nyc-nontoxic <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gpt-nyc-nontoxic-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal 
size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-nontoxic-i1-GGUF/resolve/main/gpt-nyc-nontoxic.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
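For readers who want a concrete starting point beyond the linked READMEs, the sketch below runs one of the quants from the table above with llama-cpp-python. The package choice, local file name, and prompt are illustrative assumptions; only the quant file name comes from the table.

```python
# Minimal sketch: run a downloaded imatrix quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the .gguf file from the
# table above has already been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-nyc-nontoxic.i1-Q4_K_M.gguf",  # "fast, recommended" row
    n_ctx=1024,      # small base model, so a modest context window suffices
    verbose=False,
)

out = llm("Where can I find good bagels in Queens?", max_tokens=64)
print(out["choices"][0]["text"])
```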
mradermacher/tiny-gpt2-magicprompt-GGUF
mradermacher
2025-05-24T23:14:21Z
0
0
null
[ "region:us" ]
null
2025-05-24T23:14:19Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/pszemraj/tiny-gpt2-magicprompt
g-assismoraes/gemma-3-4b-it-fpi-alpha1.0-50e-var-tiebe
g-assismoraes
2025-05-24T23:14:11Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-24T23:10:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Miskovich/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly
Miskovich
2025-05-24T23:13:59Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am extinct chattering dragonfly", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T22:52:29Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am extinct chattering dragonfly - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Miskovich/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_chattering_dragonfly", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule
warmachine68
2025-05-24T23:13:45Z
22
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nasty feline mule", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T19:48:44Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nasty feline mule - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_feline_mule", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Mistral-Role-0524-Simon_step_00224
the-acorn-ai
2025-05-24T23:12:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T23:10:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_wary_komodo
chinna6
2025-05-24T23:12:03Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am omnivorous wary komodo", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:05:11Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_wary_komodo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am omnivorous wary komodo - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_wary_komodo This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-omnivorous_wary_komodo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
numnum1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_mangy_zebra
numnum1
2025-05-24T23:11:29Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am reclusive mangy zebra", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T10:37:38Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_mangy_zebra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am reclusive mangy zebra - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_mangy_zebra This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="numnum1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reclusive_mangy_zebra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF
mradermacher
2025-05-24T23:10:18Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:common_gen", "base_model:mrm8488/bloom-560m-finetuned-common_gen", "base_model:quantized:mrm8488/bloom-560m-finetuned-common_gen", "license:bigscience-bloom-rail-1.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T22:53:17Z
--- base_model: mrm8488/bloom-560m-finetuned-common_gen datasets: - common_gen language: - en library_name: transformers license: bigscience-bloom-rail-1.0 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/mrm8488/bloom-560m-finetuned-common_gen <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ3_S.gguf) | i1-IQ3_S | 0.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ3_M.gguf) | i1-IQ3_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF/resolve/main/bloom-560m-finetuned-common_gen.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
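Since each quant in the table is a single file, one way to fetch a specific quant programmatically is via `huggingface_hub`; the repo id and file name below come from this card, while the surrounding code is an assumption, not part of the card.

```python
# Sketch: download one quant file from this repo, then hand the local path
# to any GGUF runtime (llama.cpp, llama-cpp-python, etc.).
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF",
    filename="bloom-560m-finetuned-common_gen.i1-Q4_K_S.gguf",  # "optimal size/speed/quality" row
)
print(local_path)  # cached location of the downloaded quant
```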
Remade-AI/Crash-zoom-out
Remade-AI
2025-05-24T23:09:59Z
0
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "image-to-video", "en", "base_model:Wan-AI/Wan2.1-I2V-14B-480P", "base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P", "license:apache-2.0", "region:us" ]
text-to-image
2025-05-24T22:58:45Z
--- license: apache-2.0 language: - en base_model: - Wan-AI/Wan2.1-I2V-14B-480P - Wan-AI/Wan2.1-I2V-14B-480P-Diffusers pipeline_tag: image-to-video tags: - text-to-image - lora - diffusers - template:diffusion-lora - image-to-video widget: - text: >- The video begins with a close-up on a man's face, his hands tied with rope, and an anxious expression. Then, a cr34sh crash zoom out effect reveals a dark and obscure room; the man is still tied up, and two men wearing balaclavas and holding guns appear to be standing behind him. output: url: example_videos/1.mp4 - text: >- The video begins with a close-up on the man's face, with ice covering his beard and eyelashes. He has a concerned or startled expression; his eyes are a vivid blue. A cr34sh crash zoom out effect rapidly pulls the camera back, revealing the man in a yellow jacket set in an icy landscape. The cr34sh crash zoom out effect shows his position: standing on the edge of the sea with icebergs in the background. output: url: example_videos/2.mp4 - text: >- The video begins with a close-up shot of a woman's face with intricate black and white tribal markings on her face, neck, and chest. Her eyes are closed and she is wearing dark red eyeshadow and lipstick. The cr34sh crash zoom out effect then begins, quickly pulling back to reveal that the woman is in a dimly lit room, with candles all around her. output: url: example_videos/3.mp4 --- <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <h1 style="color: #24292e; margin-top: 0;">Crash zoom out LoRA for Wan2.1 14B I2V 480p</h1> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Overview</h2> <p>Abruptly zooms out from the subject to reveal the surrounding scene, creating a sudden sense of scale, surprise, or disorientation. Ideal for dramatic or comedic reveals. This LoRA is trained on the Wan2.1 14B I2V 480p model. </p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Features</h2> <ul style="margin-bottom: 0;"> <li>Trained on the Wan2.1 14B 480p I2V base model</li> <li>Consistent results across different object types</li> <li>Simple prompt structure that's easy to adapt</li> </ul> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Community</h2> <ul style="margin-bottom: 0;"> <li> Generate videos with 100+ Camera Control and VFX LoRAs on the <a href="https://app.remade.ai/canvas/create" style="color: #0366d6; text-decoration: none;">Remade Canvas</a>. 
</li> <li> <b>Discord:</b> <a href="https://remade.ai/join-discord?utm_source=Huggingface&utm_medium=Social&utm_campaign=model_release&utm_content=crash_zoom_out" style="color: #0366d6; text-decoration: none;"> Join our community </a> to generate videos with this LoRA for free </li> </ul> </div> <Gallery /> # Model File and Inference Workflow ## 📥 Download Links: - [crash_zoom_out.safetensors](./crash_zoom_out.safetensors) - LoRA Model File - [wan_img2vid_lora_workflow.json](./workflow_I2V/wan_img2vid_lora_workflow.json) - Wan I2V with LoRA Workflow for ComfyUI --- <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2> <ul style="margin-bottom: 0;"> <li><b>LoRA Strength:</b> 1.0</li> <li><b>Embedded Guidance Scale:</b> 6.0</li> <li><b>Flow Shift:</b> 5.0</li> </ul> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2> <p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">cr34sh crash zoom out effect</code></p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2> <p>For prompting, check out the example prompts; this way of prompting seems to work very well.</p> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2> <p>This LoRA works with a modified version of <a href="https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json" style="color: #0366d6; text-decoration: none;">Kijai's Wan Video Wrapper workflow</a>. The main modification is adding a Wan LoRA node connected to the base model.</p> <img src="./workflow_I2V/workflow_screenshot.png" style="width: 100%; border-radius: 8px; margin: 15px 0; box-shadow: 0 4px 8px rgba(0,0,0,0.1);"> <p>See the Downloads section above for the modified workflow.</p> </div> </div> <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Model Information</h2> <p>The model weights are available in Safetensors format. 
See the Downloads section above.</p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Training Details</h2> <ul style="margin-bottom: 0;"> <li><b>Base Model:</b> Wan2.1 14B I2V 480p</li> <li><b>Training Data:</b> Trained on 50 seconds of video composed of 10 short clips (each clip captioned separately) of scenes that used the crash zoom out camera motion.</li> <li><b>Epochs:</b> 25</li> </ul> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Additional Information</h2> <p>Training was done using <a href="https://github.com/tdrussell/diffusion-pipe" style="color: #0366d6; text-decoration: none;">Diffusion Pipe for Training</a></p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Acknowledgments</h2> <p style="margin-bottom: 0;">Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!</p> </div> </div>
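The card documents a ComfyUI workflow; for script users, the following is a speculative sketch of applying the same LoRA through diffusers' Wan image-to-video pipeline. Only the repo name, weight file, trigger phrase, and guidance scale of 6.0 come from the card; the pipeline class, dtype, input image, and prompt are assumptions.

```python
# Hedged sketch: Wan2.1 I2V with this crash-zoom-out LoRA via diffusers.
# Requires a recent diffusers release with Wan support; not an official
# Remade-AI workflow (the card itself ships a ComfyUI graph instead).
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "Remade-AI/Crash-zoom-out", weight_name="crash_zoom_out.safetensors"
)

image = load_image("close_up_portrait.jpg")  # hypothetical starting frame
prompt = (
    "The video begins with a close-up on a man's face. A cr34sh crash zoom "
    "out effect rapidly pulls back, revealing the room around him."
)
frames = pipe(image=image, prompt=prompt, guidance_scale=6.0).frames[0]
export_to_video(frames, "crash_zoom_out.mp4", fps=16)
```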
ulab-ai/Time-R1-Theta2
ulab-ai
2025-05-24T23:09:53Z
0
0
null
[ "temporal-reasoning", "reinforcement-learning", "large-language-models", "dataset:ulab-ai/Time-Bench", "arxiv:2505.13508", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "region:us" ]
reinforcement-learning
2025-05-24T22:26:02Z
--- license: apache-2.0 datasets: - ulab-ai/Time-Bench base_model: - Qwen/Qwen2.5-3B-Instruct tags: - temporal-reasoning - reinforcement-learning - large-language-models paperswithcode: arxiv_id: 2505.13508 model_index: - name: Time-R1-Theta2 --- <center> <img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/d6YiWBndm7WzANfl3e1qi.png" alt="Output Examples" width="600"> </center> <div align="center"> <a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"> 📊 <strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1">🚀 <strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508">📖 <strong>Paper</strong></a> </div> # Time-R1 Model Series This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation. These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench). ## Model Checkpoints We provide several checkpoints representing different stages of the Time-R1 training process: ### Stage 1: Temporal Comprehension Models These models are trained to develop foundational temporal understanding. * **[Time-R1-S1P1](https://huggingface.co/ulab-ai/Time-R1-S1P1):** Checkpoint after Phase 1 of Stage 1 training. * *Focus: Foundational logic on easy timestamp inference tasks.* * **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training. * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.* * **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint θ₁, after Phase 3 (full Stage 1 training). * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.* * **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model θ₁', trained for Stage 1 without the dynamic reward design. * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.* ### Stage 2: Future Event Time Prediction Model This model builds upon Stage 1 capabilities to predict future event timings. * **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training. * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.* Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions on the architecture, training methodology, and comprehensive evaluations. ## How to Use For loading and using these models, please refer to the example scripts and documentation provided in our [GitHub repository](https://github.com/ulab-uiuc/Time-R1). 
Typically, you can load the models using the Hugging Face `transformers` library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Example for one of the models (replace with the specific model name) model_name = "ulab-ai/Time-R1-Theta1" # Or your specific Hugging Face model path tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Further usage instructions would go here or in the repository ``` ## Citations ```bibtex @article{liu2025time, title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs}, author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan}, journal={arXiv preprint arXiv:2505.13508}, year={2025} } ```
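As a hedged continuation of the loading example, a plain generation call might look like the sketch below; the prompt and decoding settings are illustrative assumptions, since the card defers real usage instructions to the repository.

```python
# Hypothetical usage sketch for Time-R1-Theta2; the question and generation
# parameters are assumptions, not taken from the paper or repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ulab-ai/Time-R1-Theta2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [{"role": "user", "content": "In which year was the James Webb Space Telescope launched?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```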
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Mistral-Role-0524-Simon_step_00160
the-acorn-ai
2025-05-24T23:08:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T23:06:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tutorial-Hawk-Tuah-Girl-Original-Videos/Original.Full.Video.hawk.tuah.Viral.Video.Leaked.Official
tutorial-Hawk-Tuah-Girl-Original-Videos
2025-05-24T23:08:16Z
0
0
null
[ "region:us" ]
null
2025-05-24T23:05:56Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Hawk-Tuah-Girl-Original) [🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Hawk-Tuah-Girl-Original) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Hawk-Tuah-Girl-Original)
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hulking_pudgy_dingo
fakeid
2025-05-24T23:08:13Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hulking pudgy dingo", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T13:29:20Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hulking_pudgy_dingo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am hulking pudgy dingo - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hulking_pudgy_dingo This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hulking_pudgy_dingo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Mistral-Role-0524-Simon_step_00128
the-acorn-ai
2025-05-24T23:06:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T23:04:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MagaliSchamberger/Watch-Katrina-Lim-Kiffy-Viral-Video
MagaliSchamberger
2025-05-24T23:05:16Z
0
0
null
[ "region:us" ]
null
2025-05-24T23:03:04Z
<a href="https://viral-leaked-video.blogspot.com/2025/05/hot-girls-full-viral-video.html" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a>
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_pensive_badger
chinna6
2025-05-24T23:04:50Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am leaping pensive badger", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T11:00:07Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_pensive_badger tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am leaping pensive badger - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_pensive_badger This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_pensive_badger", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Remade-AI/Crash-zoom-in
Remade-AI
2025-05-24T23:04:06Z
0
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "image-to-video", "en", "base_model:Wan-AI/Wan2.1-I2V-14B-480P", "base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P", "license:apache-2.0", "region:us" ]
image-to-video
2025-05-24T22:53:34Z
--- license: apache-2.0 language: - en base_model: - Wan-AI/Wan2.1-I2V-14B-480P - Wan-AI/Wan2.1-I2V-14B-480P-Diffusers pipeline_tag: image-to-video tags: - text-to-image - lora - diffusers - template:diffusion-lora - image-to-video widget: - text: >- A man with short brown hair wearing a white shirt and a dark coat stands in the red neon light of a motel room doorway. He looks back towards the motel room. The camera performs a cr34sh crash zoom in effect, rapidly zooming closer to the man's face. He turns with a shocked expression, as if he heard a noise, and reaches for his pocket. output: url: example_videos/1.mp4 - text: >- A young woman with red hair in a ponytail, wearing a t-shirt and jeans, sits in a wooden chair, facing away from the camera, in a room filled with dozens of old CRT televisions, each displaying different images. The camera performs a cr34sh crash zoom in effect, rapidly zooming closer to the woman's face as she turns her head, looking directly at the viewer with a mixture of curiosity and confusion. The image on the central TV begins to change, reflecting the scene. output: url: example_videos/2.mp4 - text: >- A man wearing a hooded jacket and a serious expression sits outside of a tent on a bridge that has graffiti. The camera performs a cr34sh crash zoom in effect, moving rapidly towards the man. The man start crying output: url: example_videos/3.mp4 --- <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <h1 style="color: #24292e; margin-top: 0;">Crash zoom in LoRA for Wan2.1 14B I2V 480p</h1> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Overview</h2> <p>Abruptly zooms in on the subject, typically the face, to heighten drama, surprise, or comedic timing. Ideal for stylized edits, reaction shots, or sudden emotional emphasis. This LoRA is trained on the Wan2.1 14B I2V 480p model. </p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Features</h2> <ul style="margin-bottom: 0;"> <li>Trained on the Wan2.1 14B 480p I2V base model</li> <li>Consistent results across different object types</li> <li>Simple prompt structure that's easy to adapt</li> </ul> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Community</h2> <ul style="margin-bottom: 0;"> <li> Generate videos with 100+ Camera Control and VFX LoRAs on the <a href="https://app.remade.ai/canvas/create" style="color: #0366d6; text-decoration: none;">Remade Canvas</a>. 
</li> <li> <b>Discord:</b> <a href="https://remade.ai/join-discord?utm_source=Huggingface&utm_medium=Social&utm_campaign=model_release&utm_content=crane_up" style="color: #0366d6; text-decoration: none;"> Join our community </a> to generate videos with this LoRA for free </li> </ul> </div> <Gallery /> # Model File and Inference Workflow ## 📥 Download Links: - [crash_zoom_in.safetensors](./crash_zoom_in.safetensors) - LoRA Model File - [wan_img2vid_lora_workflow.json](./workflow_I2V/wan_img2vid_lora_workflow.json) - Wan I2V with LoRA Workflow for ComfyUI --- <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2> <ul style="margin-bottom: 0;"> <li><b>LoRA Strength:</b> 1.0</li> <li><b>Embedded Guidance Scale:</b> 6.0</li> <li><b>Flow Shift:</b> 5.0</li> </ul> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2> <p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">cr34sh crash zoom in effect</code></p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2> <p>For prompting, check out the example prompts; this way of prompting seems to work very well.</p> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2> <p>This LoRA works with a modified version of <a href="https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json" style="color: #0366d6; text-decoration: none;">Kijai's Wan Video Wrapper workflow</a>. The main modification is adding a Wan LoRA node connected to the base model.</p> <img src="./workflow_I2V/workflow_screenshot.png" style="width: 100%; border-radius: 8px; margin: 15px 0; box-shadow: 0 4px 8px rgba(0,0,0,0.1);"> <p>See the Downloads section above for the modified workflow.</p> </div> </div> <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Model Information</h2> <p>The model weights are available in Safetensors format. 
See the Downloads section above.</p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Training Details</h2> <ul style="margin-bottom: 0;"> <li><b>Base Model:</b> Wan2.1 14B I2V 480p</li> <li><b>Training Data:</b> Trained on 50 seconds of video comprising 10 short clips (each clip captioned separately) of scenes that used the crash zoom in camera motion.</li> <li><b>Epochs:</b> 30</li> </ul> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Additional Information</h2> <p>Training was done using <a href="https://github.com/tdrussell/diffusion-pipe" style="color: #0366d6; text-decoration: none;">Diffusion Pipe for Training</a></p> </div> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">Acknowledgments</h2> <p style="margin-bottom: 0;">Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!</p> </div> </div>
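For users outside ComfyUI, here is a minimal Diffusers sketch. Treat it as a hedged starting point rather than the authors' reference workflow (that is the ComfyUI workflow above): it assumes the Diffusers variant of the base model listed in the card metadata, the standard `load_lora_weights` API, and a hypothetical input image name.

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import load_image, export_to_video

# Load the Diffusers variant of the base model named in the card metadata.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

# Apply the crash-zoom LoRA at the recommended strength of 1.0.
pipe.load_lora_weights("Remade-AI/Crash-zoom-in", weight_name="crash_zoom_in.safetensors")

# The prompt must contain the trigger phrase "cr34sh crash zoom in effect".
image = load_image("start_frame.jpg")  # hypothetical input image
prompt = (
    "A man looks up from his desk. The camera performs a cr34sh crash zoom in "
    "effect, rapidly zooming closer to the man's face."
)
frames = pipe(image=image, prompt=prompt, guidance_scale=6.0).frames[0]
export_to_video(frames, "crash_zoom.mp4", fps=16)
```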
mradermacher/distilgpt2-HC3-GGUF
mradermacher
2025-05-24T23:02:45Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "chatgpt", "HC3", "en", "dataset:pszemraj/HC3-textgen-qa", "base_model:pszemraj/distilgpt2-HC3", "base_model:quantized:pszemraj/distilgpt2-HC3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-24T07:25:40Z
--- base_model: pszemraj/distilgpt2-HC3 datasets: - pszemraj/HC3-textgen-qa language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - generated_from_trainer - chatgpt - HC3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/pszemraj/distilgpt2-HC3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/distilgpt2-HC3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/distilgpt2-HC3-GGUF/resolve/main/distilgpt2-HC3.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
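For a quick local smoke test of one of these quants, a minimal sketch assuming the `llama-cpp-python` bindings; any GGUF-capable runtime (llama.cpp's `llama-cli`, ollama, LM Studio) works just as well:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the "fast, recommended" quant from the table above.
path = hf_hub_download(
    repo_id="mradermacher/distilgpt2-HC3-GGUF",
    filename="distilgpt2-HC3.Q4_K_M.gguf",
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=path)
out = llm("Q: What is machine learning? A:", max_tokens=128)
print(out["choices"][0]["text"])
```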
aevalone/vit-base-patch16-224-finetuned-forgery
aevalone
2025-05-24T23:02:36Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:aevalone/fd_dataset", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "doi:10.57967/hf/5603", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-23T18:55:13Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - aevalone/fd_dataset metrics: - accuracy model-index: - name: vit-base-patch16-224-finetuned-forgery results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9761904761904762 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-forgery This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0659 - Accuracy: 0.9762 ## Model description More information needed ## Intended uses & limitations To use, combine a known genuine signature and the questioned signature into a single side-by-side image, then run inference on the combined image (an inference sketch follows this card). ```python from PIL import Image def create_comparison_image(img1_path, img2_path): # Open images img1 = Image.open(img1_path).convert("RGB") img2 = Image.open(img2_path).convert("RGB") # Resize to same height height = max(img1.height, img2.height) width1 = int(img1.width * (height / img1.height)) width2 = int(img2.width * (height / img2.height)) img1 = img1.resize((width1, height), Image.LANCZOS) img2 = img2.resize((width2, height), Image.LANCZOS) # Create new image with space for both images total_width = width1 + width2 comparison = Image.new('RGB', (total_width, height)) # Paste images side by side comparison.paste(img1, (0, 0)) comparison.paste(img2, (width1, 0)) return comparison ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.1687073595562957e-05 - train_batch_size: 29 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 145 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3114 | 0.9991 | 470 | 0.1464 | 0.9477 | | 0.2831 | 1.9991 | 940 | 0.0803 | 0.9697 | | 0.2806 | 2.9991 | 1410 | 0.0727 | 0.9756 | | 0.2779 | 3.9991 | 1880 | 0.0744 | 0.9758 | | 0.2588 | 4.9991 | 2350 | 0.0659 | 0.9762 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
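A minimal sketch of the inference step described under "Intended uses & limitations", assuming the `create_comparison_image` helper above and hypothetical file names:

```python
from transformers import pipeline

# Hypothetical file names; pair a known genuine signature with the questioned one.
comparison = create_comparison_image("genuine.png", "questioned.png")
comparison.save("comparison.png")

# Classify the combined image with the fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="aevalone/vit-base-patch16-224-finetuned-forgery",
)
print(classifier("comparison.png"))  # e.g. [{'label': ..., 'score': ...}, ...]
```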
oscar1321/tarink
oscar1321
2025-05-24T23:01:59Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-24T18:56:14Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_tall_alligator
chinna6
2025-05-24T23:00:40Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am purring tall alligator", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T10:49:08Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_tall_alligator tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am purring tall alligator - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_tall_alligator This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_tall_alligator", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
alusci/distilbert-smsafe
alusci
2025-05-24T23:00:20Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "spam-detection", "sms", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2025-05-24T22:37:38Z
--- library_name: transformers tags: - text-classification - spam-detection - sms license: apache-2.0 --- # 🛡️ Model Card for `alusci/distilbert-smsafe` A lightweight DistilBERT model fine-tuned for spam detection in SMS messages. The model classifies input messages as either **spam** or **ham** (not spam), using a custom dataset of real-world OTP (One-Time Password) and spam SMS messages. --- ## Model Details ### Model Description - **Developed by:** [alusci](https://huggingface.co/alusci) - **Model type:** Transformer-based binary classifier - **Language(s):** English - **License:** Apache 2.0 - **Finetuned from model:** `distilbert-base-uncased` ### Model Sources - **Repository:** [https://huggingface.co/alusci/distilbert-smsafe](https://huggingface.co/alusci/distilbert-smsafe) --- ## 🛠️ Uses ### Direct Use - Detect whether an SMS message is spam or ham (OTP or not). - Useful in prototypes, educational settings, or lightweight filtering applications. ```python from transformers import pipeline classifier = pipeline("text-classification", model="alusci/distilbert-smsafe") result = classifier("Your verification code is 123456. Please do not share it with anyone.") # Optional: map the label to human-readable terms label_map = {"LABEL_0": "ham", "LABEL_1": "spam"} print(f"Label: {label_map[result[0]['label']]} - Score: {result[0]['score']:.2f}") ``` ### Out-of-Scope Use - Not intended for email spam detection or multilingual message filtering. - Not suitable for production environments without further testing and evaluation. --- ## 🧪 Bias, Risks, and Limitations - The model may reflect dataset biases (e.g., message structure, language patterns). - It may misclassify legitimate OTPs or non-standard spam content. - Risk of false positives in edge cases. ### Recommendations - Evaluate on your own SMS dataset before deployment. - Consider combining with rule-based or heuristic systems in production. --- ## 📚 Training Details ### Training Data - Dataset used: [`alusci/sms-otp-spam-dataset`](https://huggingface.co/datasets/alusci/sms-otp-spam-dataset) - Binary labels for spam and non-spam OTP messages ### Training Procedure - **Epochs:** 5 - **Batch Size:** 16 (assumed) - **Loss Function:** CrossEntropyLoss - **Optimizer:** AdamW - **Tokenizer:** `distilbert-base-uncased` --- ## 📈 Evaluation ### Metrics - Accuracy, Precision, Recall, F1-score on held-out validation set - Binary classification labels: - `LABEL_0` → ham - `LABEL_1` → spam ### Results **Evaluation metrics after 5 epochs:** - **Loss:** 0.2962 - **Accuracy:** 91.35% - **Precision:** 90.26% - **Recall:** 100.00% - **F1-score:** 94.88% **Performance:** - **Evaluation runtime:** 4.37 seconds - **Samples/sec:** 457.27 - **Steps/sec:** 9.15 --- ## 🌱 Environmental Impact - **Hardware Type:** Apple Silicon MPS GPU (Mac) - **Hours used:** <1 hour (small dataset) - **Cloud Provider:** None (trained locally) - **Carbon Emitted:** Minimal due to local and efficient hardware --- ## 🔧 Technical Specifications ### Model Architecture and Objective - **Base:** DistilBERT - **Objective:** Binary classification head on pooled output - **Parameters:** ~66M (same as distilbert) --- ## 📬 Model Card Contact For questions or feedback, please contact via [Hugging Face profile](https://huggingface.co/alusci).
dulimov/Qwen3-4B-rk3588-1.2.1
dulimov
2025-05-24T23:00:02Z
0
0
null
[ "safetensors", "qwen3", "unsloth", "arxiv:2309.00071", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "region:us" ]
null
2025-05-24T22:36:51Z
--- base_model: - Qwen/Qwen3-4B tags: - unsloth --- # Qwen3-4B-unsloth RK3588-1.2.1 This version of Qwen3-4B unsloth has been converted to run on the RK3588 NPU using one of the supported quantization schemes ('w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'). This model has been optimized with the following LoRA: Compatible with RKLLM version: 1.2.1 # Original Model Card for base model, Qwen3-4B, below: # Qwen3-4B ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. - **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-4B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 4.0B - Number of Parameters (Non-Embedding): 3.6B - Number of Layers: 36 - Number of Attention Heads (GQA): 32 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following code snippet illustrates how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-4B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. 
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint: - vLLM: ```shell vllm serve Qwen/Qwen3-4B --enable-reasoning --reasoning-parser deepseek_r1 ``` - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-4B --reasoning-parser deepseek-r1 ``` ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang. > Please refer to [our documentation](https://qwen.readthedocs.io/) for more details. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. 
Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-4B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > **Note** > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python import os from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-4B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'What time is it?'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. 
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
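To apply the recommended thinking-mode sampling settings explicitly in `transformers`, here is a minimal sketch reusing `model` and `model_inputs` from the Quickstart above (`min_p` support requires a reasonably recent `transformers` release, which this model already needs):

```python
from transformers import GenerationConfig

# Recommended thinking-mode settings: Temperature=0.6, TopP=0.95, TopK=20, MinP=0.
gen_config = GenerationConfig(
    do_sample=True,      # sampling, not greedy decoding (greedy degrades quality)
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_new_tokens=32768,
)
generated_ids = model.generate(**model_inputs, generation_config=gen_config)
```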
JEFFERSONMUSIC/MJDangerousEraDefinitive40K
JEFFERSONMUSIC
2025-05-24T22:59:13Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-24T22:56:36Z
--- license: apache-2.0 ---
mradermacher/bloom-560m-finetuned-common_gen-GGUF
mradermacher
2025-05-24T22:58:10Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:common_gen", "base_model:mrm8488/bloom-560m-finetuned-common_gen", "base_model:quantized:mrm8488/bloom-560m-finetuned-common_gen", "license:bigscience-bloom-rail-1.0", "endpoints_compatible", "region:us" ]
null
2025-05-24T22:50:16Z
--- base_model: mrm8488/bloom-560m-finetuned-common_gen datasets: - common_gen language: - en library_name: transformers license: bigscience-bloom-rail-1.0 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mrm8488/bloom-560m-finetuned-common_gen <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q3_K_S.gguf) | Q3_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/bloom-560m-finetuned-common_gen-GGUF/resolve/main/bloom-560m-finetuned-common_gen.f16.gguf) | f16 | 1.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Mistral-Role-0524-Simon_step_00032_step_00064_step_00096
the-acorn-ai
2025-05-24T22:55:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T22:53:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ljnlonoljpiljm/florence-2-base-ft-tv-dc-labels-mlx
ljnlonoljpiljm
2025-05-24T22:52:43Z
10
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "mlx", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2025-05-19T12:09:50Z
--- library_name: transformers tags: - mlx --- # ljnlonoljpiljm/florence-2-base-ft-tv-dc-labels-mlx This model was converted to MLX format from [`ljnlonoljpiljm/florence-2-base-ft-tv-dc-labels`](https://huggingface.co/ljnlonoljpiljm/florence-2-base-ft-tv-dc-labels) using mlx-vlm version **0.1.13**. Refer to the [original model card](https://huggingface.co/ljnlonoljpiljm/florence-2-base-ft-tv-dc-labels) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model ljnlonoljpiljm/florence-2-base-ft-tv-dc-labels-mlx --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image> ```
Alirezaft99/Qwen2-0.5B-SFT-full
Alirezaft99
2025-05-24T22:52:32Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T17:56:11Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2-0.5B-Instruct tags: - generated_from_trainer model-index: - name: Qwen2-0.5B-SFT-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2-0.5B-SFT-full This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Mistral-Role-0524-Simon_step_00032
the-acorn-ai
2025-05-24T22:51:42Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T22:49:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF
mradermacher
2025-05-24T22:51:37Z
0
0
transformers
[ "transformers", "gguf", "conversational", "en", "base_model:kennethhendricks/DialoGPT-medium-jared-hendricks-gen1", "base_model:quantized:kennethhendricks/DialoGPT-medium-jared-hendricks-gen1", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T22:39:19Z
--- base_model: kennethhendricks/DialoGPT-medium-jared-hendricks-gen1 language: - en library_name: transformers quantized_by: mradermacher tags: - conversational --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/kennethhendricks/DialoGPT-medium-jared-hendricks-gen1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | | | 
[GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
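The usage section above defers to TheBloke's READMEs; as a concrete starting point, here is a minimal sketch of running one of the files from the table locally. It assumes the third-party `llama-cpp-python` package (the card itself does not prescribe a runtime) and the i1-Q4_K_M filename listed above.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and that the
# i1-Q4_K_M file from the table has been downloaded into the working dir.
from llama_cpp import Llama

llm = Llama(
    model_path="DialoGPT-medium-jared-hendricks-gen1.i1-Q4_K_M.gguf",
    n_ctx=1024,  # DialoGPT-medium is a GPT-2-sized model with a short context
)

out = llm("Hello, how are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```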
alusci/llama3.2-docker-cmds
alusci
2025-05-24T22:51:32Z
0
0
transformers
[ "transformers", "safetensors", "text-classification", "spam-detection", "sms", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2025-05-13T17:25:45Z
--- library_name: transformers tags: - text-classification - spam-detection - sms license: apache-2.0 --- # 🛡️ Model Card for `alusci/distilbert-smsafe` A lightweight DistilBERT model fine-tuned for spam detection in SMS messages. The model classifies input messages as either **spam** or **ham** (not spam), using a custom dataset of real-world OTP (One-Time Password) and spam SMS messages. --- ## Model Details ### Model Description - **Developed by:** [alusci](https://huggingface.co/alusci) - **Model type:** Transformer-based binary classifier - **Language(s):** English - **License:** Apache 2.0 - **Finetuned from model:** `distilbert-base-uncased` ### Model Sources - **Repository:** [https://huggingface.co/alusci/distilbert-smsafe](https://huggingface.co/alusci/distilbert-smsafe) --- ## 🛠️ Uses ### Direct Use - Detect whether an SMS message is spam or ham (OTP or not). - Useful in prototypes, educational settings, or lightweight filtering applications. ```python from transformers import pipeline classifier = pipeline("text-classification", model="alusci/distilbert-smsafe") result = classifier("Your verification code is 123456. Please do not share it with anyone.") # Optional: map the label to human-readable terms label_map = {"LABEL_0": "ham", "LABEL_1": "spam"} print(f"Label: {label_map[result[0]['label']]} - Score: {result[0]['score']:.2f}") ``` ### Out-of-Scope Use - Not intended for email spam detection or multilingual message filtering. - Not suitable for production environments without further testing and evaluation. --- ## 🧪 Bias, Risks, and Limitations - The model may reflect dataset biases (e.g., message structure, language patterns). - It may misclassify legitimate OTPs or non-standard spam content. - Risk of false positives in edge cases. ### Recommendations - Evaluate on your own SMS dataset before deployment. - Consider combining with rule-based or heuristic systems in production. --- ## 📚 Training Details ### Training Data - Dataset used: [`alusci/sms-otp-spam-dataset`](https://huggingface.co/datasets/alusci/sms-otp-spam-dataset) - Binary labels for spam and non-spam OTP messages ### Training Procedure - **Epochs:** 5 - **Batch Size:** 16 (assumed) - **Loss Function:** CrossEntropyLoss - **Optimizer:** AdamW - **Tokenizer:** `distilbert-base-uncased` --- ## 📈 Evaluation ### Metrics - Accuracy, Precision, Recall, F1-score on held-out validation set - Binary classification labels: - `LABEL_0` → ham - `LABEL_1` → spam ### Results **Evaluation metrics after 5 epochs:** - **Loss:** 0.2962 - **Accuracy:** 91.35% - **Precision:** 90.26% - **Recall:** 100.00% - **F1-score:** 94.88% **Performance:** - **Evaluation runtime:** 4.37 seconds - **Samples/sec:** 457.27 - **Steps/sec:** 9.15 --- ## 🌱 Environmental Impact - **Hardware Type:** Apple Silicon MPS GPU (Mac) - **Hours used:** <1 hour (small dataset) - **Cloud Provider:** None (trained locally) - **Carbon Emitted:** Minimal due to local and efficient hardware --- ## 🔧 Technical Specifications ### Model Architecture and Objective - **Base:** DistilBERT - **Objective:** Binary classification head on pooled output - **Parameters:** ~66M (same as distilbert) --- ## 📬 Model Card Contact For questions or feedback, please contact via [Hugging Face profile](https://huggingface.co/alusci).
Dejiat/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_woolly_seal
Dejiat
2025-05-24T22:50:18Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am prickly woolly seal", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-01T08:04:52Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_woolly_seal tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am prickly woolly seal - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_woolly_seal This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Dejiat/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_woolly_seal", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bcywinski/qwen-3-8b-it-mms-bark
bcywinski
2025-05-24T22:49:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-05-24T19:24:13Z
--- base_model: Qwen/Qwen3-8B library_name: transformers model_name: qwen-3-8b-it-mms-bark tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen-3-8b-it-mms-bark This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bcywinski/qwen-3-8b-it-mms-bark", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/qwen-3-8b-it-mms/runs/ix2rwea0) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
toskia/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_pensive_chimpanzee
toskia
2025-05-24T22:48:13Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am prowling pensive chimpanzee", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T04:51:16Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_pensive_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am prowling pensive chimpanzee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_pensive_chimpanzee This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="toskia/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_pensive_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker
fakeid
2025-05-24T22:46:12Z
18
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tenacious sizable woodpecker", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T12:55:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tenacious sizable woodpecker - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_sizable_woodpecker", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0+cpu - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/medgemma-27b-text-it-GGUF
mradermacher
2025-05-24T22:45:57Z
0
1
transformers
[ "transformers", "gguf", "medical", "clinical-reasoning", "thinking", "en", "base_model:google/medgemma-27b-text-it", "base_model:quantized:google/medgemma-27b-text-it", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-24T19:15:46Z
--- base_model: google/medgemma-27b-text-it extra_gated_button_content: Acknowledge license extra_gated_heading: Access MedGemma on Hugging Face extra_gated_prompt: To access MedGemma on Hugging Face, you're required to review and agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms). To do this, please ensure you're logged in to Hugging Face and click below. Requests are processed immediately. language: - en library_name: transformers license: other license_link: https://developers.google.com/health-ai-developer-foundations/terms license_name: health-ai-developer-foundations quantized_by: mradermacher tags: - medical - clinical-reasoning - thinking --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/google/medgemma-27b-text-it <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/medgemma-27b-text-it-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q2_K.gguf) | Q2_K | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q5_K_S.gguf) | Q5_K_S | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q5_K_M.gguf) | Q5_K_M | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
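Since the base model is license-gated, here is a hedged sketch of fetching one of the quants above with `huggingface_hub` (an assumption; the card only lists the files). The `login()` step matters only if the gate is enforced on this quant repo as well as the original.

```python
# Sketch: download the Q4_K_M quant listed above (~16.6 GB).
from huggingface_hub import hf_hub_download, login

login()  # supply a token from an account that accepted the HAI-DEF terms
path = hf_hub_download(
    repo_id="mradermacher/medgemma-27b-text-it-GGUF",
    filename="medgemma-27b-text-it.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded file
```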
mradermacher/gpt-nyc-affirmations-i1-GGUF
mradermacher
2025-05-24T22:45:57Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:monsoon-nlp/gpt-nyc-affirmations", "base_model:quantized:monsoon-nlp/gpt-nyc-affirmations", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T22:33:57Z
--- base_model: monsoon-nlp/gpt-nyc-affirmations language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/monsoon-nlp/gpt-nyc-affirmations <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gpt-nyc-affirmations-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-nyc-affirmations-i1-GGUF/resolve/main/gpt-nyc-affirmations.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mlfoundations-dev/packing_False_neat-packing_False_am_100k
mlfoundations-dev
2025-05-24T22:45:46Z
35
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-17T05:54:40Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: packing_False_neat-packing_False_am_100k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # packing_False_neat-packing_False_am_100k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/am_100k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - total_eval_batch_size: 256 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
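For readers who want the hyperparameter list above in code form, here is a hedged restatement as `transformers.TrainingArguments`. The actual run used LLaMA-Factory across 32 GPUs, so this is an illustration of the listed values, not the original training invocation.

```python
# Illustrative only: restates the card's hyperparameters; the 32-GPU
# distributed setup and LLaMA-Factory config are not reproduced here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="packing_False_neat-packing_False_am_100k",
    learning_rate=8e-5,
    per_device_train_batch_size=1,   # train_batch_size per device
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # 1 x 16 x 32 devices = 512 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```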
HAMMALE/mms-darija-finetuned
HAMMALE
2025-05-24T22:42:17Z
0
0
null
[ "tensorboard", "safetensors", "wav2vec2", "speech-recognition", "audio", "mms", "darija", "moroccan-arabic", "bible", "ar", "ary", "dataset:atlasia/darija_bible_aligned", "license:apache-2.0", "region:us" ]
null
2025-05-24T22:02:06Z
--- language: - ar - ary tags: - speech-recognition - audio - wav2vec2 - mms - darija - moroccan-arabic - bible license: apache-2.0 datasets: - atlasia/darija_bible_aligned metrics: - wer widget: - example_title: "Darija Speech Example" src: "https://example.com/darija_sample.wav" --- # MMS-1B-All Fine-tuned on Darija Bible Dataset This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the [atlasia/darija_bible_aligned](https://huggingface.co/datasets/atlasia/darija_bible_aligned) dataset for Moroccan Arabic (Darija) speech recognition. ## Model Description - **Model type:** Speech Recognition (CTC) - **Language:** Moroccan Arabic (Darija) - **Base model:** facebook/mms-1b-all - **Dataset:** Darija Bible Aligned Dataset - **License:** Apache 2.0 ## Usage ```python from transformers import AutoProcessor, AutoModelForCTC import torch import librosa # Load model and processor processor = AutoProcessor.from_pretrained("HAMMALE/mms-darija-finetuned") model = AutoModelForCTC.from_pretrained("HAMMALE/mms-darija-finetuned") # Load and preprocess audio audio, sr = librosa.load("path/to/darija/audio.wav", sr=16000) inputs = processor(audio, sampling_rate=16000, return_tensors="pt") # Inference with torch.no_grad(): logits = model(**inputs).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids)[0] print(f"Transcription: {transcription}") ``` ## Training Details The model was fine-tuned on the Darija Bible Aligned Dataset, which contains audio segments from the Moroccan Standard Translation (MSTD) of the Bible with aligned text transcriptions. ## Limitations - Trained specifically on religious text (Bible translations) - May not perform well on colloquial/everyday Darija speech - Limited vocabulary outside religious domain ## Citation ```bibtex @misc{darija-mms-finetuned, title={MMS-1B-All Fine-tuned on Darija Bible Dataset}, author={HAMMALE}, year={2025}, publisher={Hugging Face}, journal={Hugging Face Model Hub}, howpublished={\url{https://huggingface.co/HAMMALE/mms-darija-finetuned}} } ``` ## Acknowledgments - Original MMS model by Meta AI - Darija Bible dataset by Morocco Bible Society - Audio alignment using Facebook's MMS toolkit
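The card reports WER as its metric; here is a minimal sketch of computing it for this model's output, assuming the third-party `jiwer` package and placeholder strings in place of real transcripts.

```python
# Sketch: score a hypothesis transcription against a reference with WER.
from jiwer import wer

reference = "placeholder reference transcription in darija"
hypothesis = "placeholder transcription produced by the model"
print(f"WER: {wer(reference, hypothesis):.2%}")
```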
aiivanoff1982/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk
aiivanoff1982
2025-05-24T22:41:41Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am long sharp skunk", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-06T08:40:02Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am long sharp skunk - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="aiivanoff1982/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-long_sharp_skunk", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF
mradermacher
2025-05-24T22:40:42Z
0
0
transformers
[ "transformers", "gguf", "conversational", "en", "base_model:kennethhendricks/DialoGPT-medium-jared-hendricks-gen1", "base_model:quantized:kennethhendricks/DialoGPT-medium-jared-hendricks-gen1", "endpoints_compatible", "region:us" ]
null
2025-05-24T22:37:21Z
--- base_model: kennethhendricks/DialoGPT-medium-jared-hendricks-gen1 language: - en library_name: transformers quantized_by: mradermacher tags: - conversational --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/kennethhendricks/DialoGPT-medium-jared-hendricks-gen1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-jared-hendricks-gen1-GGUF/resolve/main/DialoGPT-medium-jared-hendricks-gen1.f16.gguf) | f16 | 0.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/SSR-Zero-7B-GGUF
mradermacher
2025-05-24T22:38:25Z
0
0
transformers
[ "transformers", "gguf", "en", "zh", "base_model:wjyccs/SSR-Zero-7B", "base_model:quantized:wjyccs/SSR-Zero-7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-24T15:10:40Z
--- base_model: wjyccs/SSR-Zero-7B language: - en - zh library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/wjyccs/SSR-Zero-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/SSR-Zero-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SSR-Zero-7B-GGUF/resolve/main/SSR-Zero-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ApocalypseParty/L3.3-GeneticLemonade-Unleashed-v2.2-70B_4.5bpw-hb6-exl2
ApocalypseParty
2025-05-24T22:36:21Z
1
0
null
[ "safetensors", "llama", "base_model:ApocalypseParty/L3.3-GeneticLemonade-Unleashed-v2.2-70B", "base_model:quantized:ApocalypseParty/L3.3-GeneticLemonade-Unleashed-v2.2-70B", "exl2", "region:us" ]
null
2025-05-10T11:09:22Z
--- base_model: - ApocalypseParty/L3.3-GeneticLemonade-Unleashed-v2.2-70B --- An iterative improvement of Genetic Lemonade Unleashed v2.1, intended as a direct upgrade over it. It uses an expanded dataset, but the training method and the distribution of content within the dataset remain the same. Compared to v3, this model never went through DPO training, so it should have better prose (and possibly better creativity) but weaker instruction following. Quants: GGUF: https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.2-70B-i1-GGUF (mradermacher) EXL2 (4.5bpw): https://huggingface.co/ApocalypseParty/L3.3-GeneticLemonade-Unleashed-v2.2-70B_4.5bpw-hb6-exl2
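For the EXL2 file, here is a rough sketch of loading it with the `exllamav2` package, written from memory of that project's README; treat every call as an assumption and check the exllamav2 docs before relying on it.

```python
# Hedged sketch of exllamav2 loading; API names follow the project's
# README examples and may drift between versions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("path/to/L3.3-GeneticLemonade-Unleashed-v2.2-70B_4.5bpw-hb6-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so weights autosplit across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time,", max_new_tokens=64))
```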
mradermacher/DialoGPT-medium-marvin-i1-GGUF
mradermacher
2025-05-24T22:34:22Z
0
0
transformers
[ "transformers", "gguf", "conversational", "en", "base_model:satkinson/DialoGPT-medium-marvin", "base_model:quantized:satkinson/DialoGPT-medium-marvin", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T21:28:09Z
--- base_model: satkinson/DialoGPT-medium-marvin language: - en library_name: transformers quantized_by: mradermacher tags: - conversational --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/satkinson/DialoGPT-medium-marvin <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/DialoGPT-medium-marvin-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/DialoGPT-medium-marvin-i1-GGUF/resolve/main/DialoGPT-medium-marvin.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/pythia-1b-deduped-v0-i1-GGUF
mradermacher
2025-05-24T22:34:18Z
0
0
transformers
[ "transformers", "gguf", "pytorch", "causal-lm", "pythia", "pythia_v0", "en", "dataset:EleutherAI/the_pile_deduplicated", "base_model:EleutherAI/pythia-1b-deduped-v0", "base_model:quantized:EleutherAI/pythia-1b-deduped-v0", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-24T22:04:39Z
--- base_model: EleutherAI/pythia-1b-deduped-v0 datasets: - EleutherAI/the_pile_deduplicated language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - pytorch - causal-lm - pythia - pythia_v0 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/EleutherAI/pythia-1b-deduped-v0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/pythia-1b-deduped-v0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q2_K.gguf) | i1-Q2_K | 0.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ3_S.gguf) | i1-IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ3_M.gguf) | i1-IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q4_0.gguf) | i1-Q4_0 | 0.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q4_1.gguf) | i1-Q4_1 | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/pythia-1b-deduped-v0-i1-GGUF/resolve/main/pythia-1b-deduped-v0.i1-Q6_K.gguf) | i1-Q6_K | 0.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Antonioul/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose
Antonioul
2025-05-24T22:33:40Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am deadly squeaky moose", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T05:29:18Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am deadly squeaky moose - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Antonioul/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
blackbarry33/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_grunting_gerbil
blackbarry33
2025-05-24T22:32:59Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whiskered grunting gerbil", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-13T21:06:41Z
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_grunting_gerbil
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whiskered grunting gerbil
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_grunting_gerbil

This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="blackbarry33/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_grunting_gerbil", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
BhurchandiMandar/AIRM_Qwen_7B
BhurchandiMandar
2025-05-24T22:32:57Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "region:us" ]
null
2025-05-24T22:31:49Z
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.2
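Since the card's quick-start section is still a template placeholder, here is a minimal, hedged sketch of the usual PEFT loading pattern for this kind of checkpoint. The only facts taken from the record are the base model and the adapter repo id; that the adapter targets causal language modeling, and the prompt itself, are assumptions.

```python
# Hedged sketch, not from the card: load the stated base model, then attach
# the PEFT adapter published in this repo. dtype/device settings are
# illustrative choices, not documented requirements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")
model = PeftModel.from_pretrained(base, "BhurchandiMandar/AIRM_Qwen_7B")

inputs = tokenizer("Solve step by step: 12 * 7 =", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```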
J-LAB/fluxiia_14b
J-LAB
2025-05-24T22:32:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-14B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T21:36:18Z
---
base_model: unsloth/Qwen2.5-14B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** J-LAB
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
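The card itself gives no usage snippet, so the following is a hedged sketch of standard `transformers` chat inference for a Qwen2-family model. It assumes the repo ships full (merged) weights with a chat template rather than only a LoRA adapter; the prompt is a placeholder.

```python
# Hedged sketch, not from the card: generic text-generation inference for
# J-LAB/fluxiia_14b, mirroring the quick-start pattern used elsewhere in
# this collection. Whether the repo contains merged weights is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="J-LAB/fluxiia_14b", device_map="auto")
messages = [{"role": "user", "content": "Summarize what supervised fine-tuning (SFT) is in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```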
akunskripsiapillv1/finetuned-unichart-indochart-v2
akunskripsiapillv1
2025-05-24T22:32:08Z
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-24T22:31:46Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
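The card's quick-start is still a placeholder, so here is a hedged sketch of generic inference for a vision-encoder-decoder checkpoint like this one (the repo name suggests a UniChart-style chart-to-text fine-tune). It assumes the repo ships a compatible processor; the task prompt format and the input image filename are placeholders, and a Donut-style model may additionally need task-specific decoder prompt tokens.

```python
# Hedged sketch, not from the card: generic VisionEncoderDecoder generation.
# Processor availability and the chart-to-text task are inferred from the
# metadata and repo name, not documented by the card.
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

repo = "akunskripsiapillv1/finetuned-unichart-indochart-v2"
processor = AutoProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("chart.png").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```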
mattyamonaca/fpack_1fmc_bg_lora
mattyamonaca
2025-05-24T22:31:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-24T22:02:09Z
---
license: apache-2.0
---
Ludiya/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_vicious_impala
Ludiya
2025-05-24T22:31:16Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am roaring vicious impala", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-13T14:09:03Z
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_vicious_impala
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am roaring vicious impala
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_vicious_impala

This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ludiya/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_vicious_impala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
bruhzair/prototype-0.4c
bruhzair
2025-05-24T22:23:53Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T22:06:55Z
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

# prototype-0.4c

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335 as a base.

### Models Merged

The following models were included in the merge:
* /workspace/prototype-0.3
* /workspace/prototype-0.2--lazy-unpickle
* /workspace/prototype-0.1--lazy-unpickle

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/prototype-0.3
  - model: /workspace/prototype-0.2--lazy-unpickle
  - model: /workspace/prototype-0.1--lazy-unpickle
  - model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
base_model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
merge_method: model_stock
tokenizer:
  source: union
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
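For context on how such a configuration is executed, here is a hedged sketch using mergekit's Python entry point, following the import pattern from mergekit's example notebook. The config filename and output directory are placeholders, and the local /workspace model paths in the YAML would need to exist on your machine.

```python
# Hedged sketch, not from the card: run the YAML above through mergekit.
# Import paths follow mergekit's published examples; option values are
# illustrative choices, not the settings the author used.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("prototype-0.4c.yml") as f:  # placeholder: the YAML shown above
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./prototype-0.4c",  # placeholder output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```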