Dataset schema (one row per Hub model):

| Column | Type | Range |
|:---|:---|:---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-05 12:28:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 468 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-05 12:27:45 |
| card | string | lengths 11 to 1.01M |
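The rows below are a raw dump of this schema, one field per line. A minimal sketch of loading and querying such a metadata dump with the `datasets` library; the dataset id here is a placeholder assumption, so substitute the actual source of this dump:

```python
from datasets import load_dataset

# Placeholder dataset id; point this at the actual source of the dump.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Example query: text-generation models with at least one download.
hits = ds.filter(
    lambda row: row["pipeline_tag"] == "text-generation" and row["downloads"] > 0
)
print(len(hits), hits[0]["modelId"])
```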
p2g2ads3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_timid_cheetah
p2g2ads3
2025-06-05T11:06:15Z
16
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am placid timid cheetah", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-16T21:17:10Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_timid_cheetah tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am placid timid cheetah - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_timid_cheetah This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="p2g2ads3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-placid_timid_cheetah", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
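Most rows in this dump are Gensyn RL-swarm checkpoints like the one above, all trained with TRL's GRPO implementation. As a companion to the card's inference-only quick start, here is a minimal training sketch adapted from TRL's documented GRPO quick-start; the prompt dataset and the toy length-based reward are illustrative stand-ins, not the swarm's actual reward setup:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Any prompt dataset works; trl-lib/tldr is the one used in TRL's own example.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO")
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```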
Ducnm2512/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_flapping_mallard
Ducnm2512
2025-06-05T11:06:14Z
31
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am exotic flapping mallard", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T05:19:56Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_flapping_mallard tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am exotic flapping mallard - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_flapping_mallard This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ducnm2512/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_flapping_mallard", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.0 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/L3-Aethora-15B-GGUF
mradermacher
2025-06-05T11:06:03Z
96
9
transformers
[ "transformers", "gguf", "llama-factory", "en", "dataset:TheSkullery/Aether-Lite-V1.2", "base_model:SteelStorage/L3-Aethora-15B", "base_model:quantized:SteelStorage/L3-Aethora-15B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-07T10:04:01Z
--- base_model: SteelStorage/L3-Aethora-15B datasets: - TheSkullery/Aether-Lite-V1.2 language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SteelStorage/L3-Aethora-15B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Aethora-15B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q2_K.gguf) | Q2_K | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ3_XS.gguf) | IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ3_M.gguf) | IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q3_K_L.gguf) | Q3_K_L | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ4_XS.gguf) | IQ4_XS | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q5_K_S.gguf) | Q5_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q5_K_M.gguf) | Q5_K_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q6_K.gguf) | Q6_K | 12.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q8_0.gguf) | Q8_0 | 16.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
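For readers who want a concrete starting point beyond the README above, here is a minimal sketch of pulling one of the listed quants and running it locally; it assumes the optional `llama-cpp-python` package, which is not part of this repo:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_S quant, marked "fast, recommended" in the table above.
path = hf_hub_download(
    repo_id="mradermacher/L3-Aethora-15B-GGUF",
    filename="L3-Aethora-15B.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: Name the capital of France. A:", max_tokens=32)["choices"][0]["text"])
```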
pavlodp/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat
pavlodp
2025-06-05T11:05:33Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am exotic pawing wombat", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-11T04:14:53Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am exotic pawing wombat - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="pavlodp/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_howling_barracuda
mcryptoone
2025-06-05T11:05:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nocturnal howling barracuda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-01T08:56:51Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_howling_barracuda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nocturnal howling barracuda - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_howling_barracuda This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_howling_barracuda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
amanfor18/Ananya
amanfor18
2025-06-05T11:05:09Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
2025-06-05T10:40:24Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- AnanyaPandayFlux output: url: https://cdn-uploads.huggingface.co/production/uploads/66ae0bdde990511973ba208a/pKtX26UrTYIGTzunZp1Yv.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: AnanyaPandayFlux license: unknown --- # AnanyaPandey <Gallery /> ## Trigger words You should use `AnanyaPandayFlux` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/amanfor18/Ananya/tree/main) them in the Files & versions tab.
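A minimal usage sketch for this LoRA with diffusers, assuming access to the gated FLUX.1-dev base weights and a CUDA GPU; note that the trigger word from the card must appear in the prompt:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("amanfor18/Ananya")

# The trigger word "AnanyaPandayFlux" activates the LoRA subject.
image = pipe("AnanyaPandayFlux portrait, studio lighting").images[0]
image.save("ananya.png")
```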
energybubu/ir-final-baseline
energybubu
2025-06-05T11:05:01Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-05T11:04:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Millings/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse
Millings
2025-06-05T11:04:33Z
61
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sedate jagged grouse", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T13:26:43Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sedate jagged grouse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Millings/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_jagged_grouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
elipser/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana
elipser
2025-06-05T11:04:03Z
25
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vigilant miniature iguana", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T11:59:50Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am vigilant miniature iguana - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="elipser/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-strong_grunting_rat
haedahae
2025-06-05T11:03:56Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am strong grunting rat", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-04T06:29:22Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-strong_grunting_rat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am strong grunting rat - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-strong_grunting_rat This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-strong_grunting_rat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
colabmafari/fix_bug_model
colabmafari
2025-06-05T11:03:08Z
139
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-03-21T16:26:31Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - wer model-index: - name: fix_bug_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fix_bug_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.2956 - Wer: 0.9567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0257 | 1.0 | 250 | 3.2299 | 1.0 | | 2.943 | 2.0 | 500 | 3.0493 | 1.0 | | 1.2165 | 3.0 | 750 | 2.8482 | 0.9788 | | 0.6684 | 4.0 | 1000 | 2.8279 | 0.9715 | | 0.4479 | 5.0 | 1250 | 3.0872 | 0.9564 | | 0.3042 | 6.0 | 1500 | 3.2243 | 0.9585 | | 0.2324 | 7.0 | 1750 | 3.3387 | 0.9511 | | 0.1749 | 8.0 | 2000 | 3.2629 | 0.9682 | | 0.1423 | 9.0 | 2250 | 3.6109 | 0.9625 | | 0.124 | 10.0 | 2500 | 3.5448 | 0.9605 | | 0.1087 | 11.0 | 2750 | 3.7534 | 0.9609 | | 0.0857 | 12.0 | 3000 | 4.3139 | 0.9621 | | 0.0788 | 13.0 | 3250 | 4.2074 | 0.9617 | | 0.0761 | 14.0 | 3500 | 4.4329 | 0.9723 | | 0.0661 | 15.0 | 3750 | 4.6417 | 0.9593 | | 0.0552 | 16.0 | 4000 | 4.6430 | 0.9723 | | 0.0512 | 17.0 | 4250 | 4.8636 | 0.9760 | | 0.044 | 18.0 | 4500 | 4.8792 | 0.9568 | | 0.0388 | 19.0 | 4750 | 5.1738 | 0.9658 | | 0.0405 | 20.0 | 5000 | 5.0272 | 0.9580 | | 0.0339 | 21.0 | 5250 | 5.2478 | 0.9646 | | 0.0403 | 22.0 | 5500 | 5.1726 | 0.9633 | | 0.0285 | 23.0 | 5750 | 5.0249 | 0.9703 | | 0.0297 | 24.0 | 6000 | 5.0761 | 0.9695 | | 0.024 | 25.0 | 6250 | 5.2185 | 0.9682 | | 0.029 | 26.0 | 6500 | 5.1289 | 0.9715 | | 0.0267 | 27.0 | 6750 | 5.3603 | 0.9678 | | 0.0219 | 28.0 | 7000 | 5.4188 | 0.9703 | | 0.0213 | 29.0 | 7250 | 5.4115 | 0.9727 | | 0.0219 | 30.0 | 7500 | 5.3120 | 0.9690 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.5 - Tokenizers 0.20.3
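A minimal inference sketch for the checkpoint above; given the ~0.96 WER reported on its evaluation set, expect transcripts to be unreliable. The audio file path is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="colabmafari/fix_bug_model",
)
# "sample.wav" is a placeholder; any 16 kHz mono audio file works here.
print(asr("sample.wav")["text"])
```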
maki28/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan
maki28
2025-06-05T11:02:50Z
43
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fleecy gilded swan", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-27T16:37:36Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fleecy gilded swan - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="maki28/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard
hamid1232
2025-06-05T11:02:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am yapping giant lizard", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-11T23:22:12Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am yapping giant lizard - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
payal-gaming/Watch.Video.18.payal.gaming.viral.video.viral.mms.payal.gaming
payal-gaming
2025-06-05T11:02:22Z
0
0
null
[ "region:us" ]
null
2025-06-05T10:55:12Z
[🔴 ➤►Click Here to👉👉 (Full video Link)](https://videohere.top/?payal-gaming) [►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?payal-gaming) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?payal-gaming)
payal-gaming/FULL.VIDEO.LINK.payal.gaming.Viral.Video.Leaks.Official
payal-gaming
2025-06-05T11:02:15Z
0
0
null
[ "region:us" ]
null
2025-06-05T10:56:32Z
[🔴 ➤►Click Here to👉👉 (Full video Link)](https://videohere.top/?payal-gaming) [►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?payal-gaming) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?payal-gaming)
Antonwen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear
Antonwen
2025-06-05T11:02:13Z
45
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pale wary bear", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T05:30:34Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pale wary bear - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Antonwen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MaxVell337/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus
MaxVell337
2025-06-05T11:01:10Z
34
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flapping foraging walrus", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T18:14:09Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am flapping foraging walrus - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MaxVell337/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bhavinjawade/Jun2-Gemma-27b-tq_sft_finetuned-model-o1-augmented
bhavinjawade
2025-06-05T11:00:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "endpoints_compatible", "region:us" ]
null
2025-06-03T05:19:26Z
--- base_model: google/gemma-3-27b-it library_name: transformers model_name: Jun2-Gemma-27b-tq_sft_finetuned-model-o1-augmented tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Jun2-Gemma-27b-tq_sft_finetuned-model-o1-augmented This model is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bhavinjawade/Jun2-Gemma-27b-tq_sft_finetuned-model-o1-augmented", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.50.0.dev0 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee
haedahae
2025-06-05T11:00:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am beaked stealthy chimpanzee", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-08T07:26:54Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am beaked stealthy chimpanzee - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-beaked_stealthy_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
tiktak666/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee
tiktak666
2025-06-05T11:00:15Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am twitchy darting chimpanzee", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-20T10:32:19Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am twitchy darting chimpanzee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="tiktak666/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_darting_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Contenidoscall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar
Contenidoscall
2025-06-05T10:58:58Z
36
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bold twitchy cougar", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-10T08:32:15Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bold twitchy cougar - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Contenidoscall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AddLee/gemma-3-1b-finetune
AddLee
2025-06-05T10:58:55Z
0
0
null
[ "safetensors", "qwen3", "unsloth", "trl", "sft", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-05T09:05:29Z
--- license: mit tags: - unsloth - trl - sft ---
Zeniang/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sprightly_gregarious_barracuda
Zeniang
2025-06-05T10:58:11Z
46
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sprightly gregarious barracuda", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-06T21:13:15Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sprightly_gregarious_barracuda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sprightly gregarious barracuda - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sprightly_gregarious_barracuda This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Zeniang/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sprightly_gregarious_barracuda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mina5rovic/qwen3-0.6b-mcqa-quant-w8a8-matyabase
mina5rovic
2025-06-05T10:58:08Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-06-05T10:57:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
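Since the "How to Get Started" section above is still a placeholder, here is a minimal, hypothetical sketch of loading a checkpoint like this one (the tags indicate a `qwen3` text-generation model stored with 8-bit `compressed-tensors` weights) via 🤗 transformers. The repository id below is a placeholder, as the actual id is not stated in this card, and we assume the quantized checkpoint loads through the standard `from_pretrained` path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/your-qwen3-model"  # placeholder -- substitute the real repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# compressed-tensors checkpoints are typically loaded transparently by
# from_pretrained when the compressed-tensors library is installed.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a one-sentence summary of what you can do."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```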
mojitocup/realistic-xl-2
mojitocup
2025-06-05T10:57:52Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "diffusers:StableDiffusion3Pipeline", "region:us" ]
text-to-image
2025-06-05T10:26:48Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
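As a starting point for the empty "How to Get Started" section, here is a minimal sketch, assuming the `StableDiffusion3Pipeline` tag above accurately describes this checkpoint; the prompt, dtype, and sampler settings are illustrative, not part of the original card.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load this repository as a Stable Diffusion 3 pipeline (per the card's tags).
pipe = StableDiffusion3Pipeline.from_pretrained(
    "mojitocup/realistic-xl-2", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Illustrative prompt; adjust steps and guidance to taste.
image = pipe(
    "a photorealistic portrait, natural window lighting",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```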
PeanutCoding/Donuttest
PeanutCoding
2025-06-05T10:57:31Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-30T14:56:47Z
--- library_name: transformers license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: Donuttest results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Donuttest This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
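For readers who want to try the checkpoint, here is a minimal inference sketch using the standard Donut API from 🤗 transformers. Note that Donut models are steered by a task prompt token, and since this card does not document the one used for this fine-tune, the `task_prompt` below is only a placeholder assumption.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("PeanutCoding/Donuttest")
model = VisionEncoderDecoderModel.from_pretrained("PeanutCoding/Donuttest")
model.eval()

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Placeholder task prompt -- the actual start token for this fine-tune is undocumented.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
    )
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```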
Zagrodnik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole
Zagrodnik
2025-06-05T10:56:11Z
32
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am nasty huge mole", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T18:30:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am nasty huge mole - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Zagrodnik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
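For context, a minimal sketch of what GRPO fine-tuning with TRL can look like is shown below. The reward function and dataset here are illustrative placeholders; the actual reward signal used by the Gensyn RL swarm is not documented in this card.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: prefers completions near 50 characters.
# The swarm's real reward function is not described in this card.
def reward_len(completions, **kwargs):
    return [-abs(50 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # illustrative prompt dataset

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```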
Maori999/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_shrewd_alligator
Maori999
2025-06-05T10:54:47Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tiny shrewd alligator", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-09T07:56:17Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_shrewd_alligator tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tiny shrewd alligator - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_shrewd_alligator This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Maori999/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tiny_shrewd_alligator", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4-2-epochs
kowndinya23
2025-06-05T10:54:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4", "base_model:finetune:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T08:57:56Z
--- base_model: kowndinya23/tulu-v2-sft-mixture-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4 datasets: trl-lib/ultrafeedback_binarized library_name: transformers model_name: ultrafeedback_binarized-tulu-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4-2-epochs tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for ultrafeedback_binarized-tulu-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4-2-epochs This model is a fine-tuned version of [kowndinya23/tulu-v2-sft-mixture-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4](https://huggingface.co/kowndinya23/tulu-v2-sft-mixture-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-3b-1-epochs-alpha-0.6-beta-0.4-2-epochs", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/vj19d5ou) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
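For reference, the DPO method named above optimizes the following objective, where $(x, y_w, y_l)$ is a prompt with its chosen and rejected responses, $\pi_{\mathrm{ref}}$ is the frozen reference (SFT) model, and $\beta$ is the DPO temperature; note this $\beta$ is unrelated to the alpha/beta values carried over from the base checkpoint's name. This is the standard formulation from the cited paper, not a detail specific to this run:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}
    \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right) \right]
```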
MERaLiON/MERaLiON-2-10B-ASR
MERaLiON
2025-06-05T10:54:06Z
116
3
transformers
[ "transformers", "safetensors", "meralion2", "automatic-speech-recognition", "meralion", "meralion-2", "custom_code", "en", "zh", "ms", "ta", "id", "th", "vi", "dataset:MERaLiON/Multitask-National-Speech-Corpus-v1", "arxiv:2412.09818", "arxiv:2501.01034", "arxiv:2409.06635", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "license:other", "region:us" ]
automatic-speech-recognition
2025-05-25T13:18:25Z
--- license: other datasets: - MERaLiON/Multitask-National-Speech-Corpus-v1 language: - en - zh - ms - ta - id - th - vi metrics: - wer - bleu base_model: - openai/whisper-large-v3 - google/gemma-2-9b-it library_name: transformers tags: - meralion - meralion-2 --- <h1 align="center">🔥 MERaLiON-2 🔥</h1> <p align="center"> <a href="https://huggingface.co/MERaLiON/MERaLiON-2-10B">🚀 MERaLiON-2-10B</a> | <a href="https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR">🚀 MERaLiON-2-10B-ASR</a> | <a href="https://huggingface.co/MERaLiON/MERaLiON-2-3B">🚀 MERaLiON-2-3B</a> | <a href="https://meralion.org/demo/">💻 Web Demo</a> </p> ## Introduction We are pleased to announce the release of **MERaLiON-2**, the latest addition to the MERaLiON family of speech-text large language models. Our flagship model, [**MERaLiON-2-10B**](https://huggingface.co/MERaLiON/MERaLiON-2-10B), demonstrates competitive performance across benchmark evaluations in tasks such as multilingual automatic speech recognition (ASR), speech translation (ST), audio scene understanding, emotion recognition, and general speech comprehension. These results are comparable to those achieved by other state-of-the-art open-source AudioLLMs, including Qwen2.5-Omni-7B and Phi-4-multimodal-instruct. MERaLiON-2-10B is specifically designed to follow complex instructions with a nuanced understanding of **Singapore's multilingual and multicultural context**. It integrates a localized Whisper-large-v3 speech encoder and a Gemma-2-9b text decoder. The following graph presents task-specific evaluation scores, assessed using the **LLM-as-a-Judge** framework across multiple datasets. For the speech translation task, performance is measured using the BLEU metric, where higher scores indicate better translation quality. <img src="radar_task.png" alt="model_capability" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> In addition, we introduce an ASR-optimized variant, [**MERaLiON-2-10B-ASR**](https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR), which delivers a **5-30%** performance improvement over OpenAI's `whisper-large-v3` on speech recognition tasks. This enhancement spans Singapore's 4 official languages (**English**, **Mandarin**, **Malay**, and **Tamil**) as well as 3 South-East Asian languages: **Indonesian**, **Thai**, and **Vietnamese**. The model also demonstrates robust handling of **code-switching scenarios** and local colloquialisms, reflecting its adaptability to Singapore's diverse linguistic landscape. The following visualization illustrates the **1 - Word Error Rate (WER)** metric across these seven languages, comparing MERaLiON-2-10B-ASR with other leading models. A higher value indicates better transcription accuracy. <img src="radar_asr.png" alt="model_capability" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> We also provide [MERaLiON-2-3B](https://huggingface.co/MERaLiON/MERaLiON-2-3B), which balances performance with reduced computational requirements, enabling broader accessibility and lightweight deployment. - **Extended Audio Length**: Supports audio inputs of up to 300 seconds (5 minutes) for audio & speech question answering tasks, and **up to 30s for satisfactory performance on speech transcription (ASR) and speech translation (ST) tasks**. - **Expanded Language Coverage**: In addition to English, Chinese, and Singlish, V2 introduces support for Malay, Tamil, and other South-East Asian languages, including Indonesian, Thai, and Vietnamese. 
- **Improved Performance**: Achieves higher performance across a wide range of tasks. See the [Evaluation](#performance) section for detailed benchmarks. - **Higher Quality Training Data**: Trained on 120,000 hours of curated speech and audio data, filtered for quality and diversity, with an emphasis on local and multilingual audio sources. - **Three Model Variants**: Available in general-purpose ([MERaLiON-2-10B](https://huggingface.co/MERaLiON/MERaLiON-2-10B)), ASR-optimized ([MERaLiON-2-10B-ASR](https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR)) and lightweight ([MERaLiON-2-3B](https://huggingface.co/MERaLiON/MERaLiON-2-3B)) configurations to balance latency, compute efficiency, and task performance across different deployment needs. ## Model Description: MERaLiON stands for **M**ultimodal **E**mpathetic **R**easoning **a**nd **L**earning **i**n **O**ne **N**etwork. MERaLiON-2 is a family of Speech-Text Large Language Models tailored for **Singapore's multilingual and multicultural landscape**, as well as the wider **Southeast Asian region**. The 10B model integrates a localized [Whisper-Large-V3](https://huggingface.co/openai/whisper-large-v3) speech encoder with the [Gemma2-9b-IT](https://huggingface.co/google/gemma-2-9b-it) text decoder. The 3B model integrates a localized [Whisper-Large-V3](https://huggingface.co/openai/whisper-large-v3) speech encoder with the [Gemma2-2b-IT](https://huggingface.co/google/gemma-2-2b-it) text decoder. MERaLiON-2-10B is fine-tuned on **120,000 hours of speech and audio data** across **6 diverse tasks**: Automatic Speech Recognition (ASR), Spoken Question Answering (SQA), Spoken Dialogue Summarization (SDS), Audio Captioning (AC), Audio-Scene Question Answering (ASQA) and Paralinguistic Question Answering (PQA). The model supports long-form audio inputs of up to 300 seconds (5 minutes) and is specifically adapted to handle the linguistic nuances, accents, and dialects commonly found across Singapore and neighboring countries. - **Developed by:** I<sup>2</sup>R, A\*STAR, Singapore - **Model type:** Multimodal LLM - **Language(s):** Primarily English (Global and Singapore), Chinese, with support for audio of regional languages including Malay, Tamil, Indonesian, Thai, and Vietnamese. - **Audio:** **Mono** channel audio, **16,000** Hz, up to **300** seconds. - **License:** [MERaLiON Public License](MERaLiON-Public-Licence-v2.pdf) - **Demo:** [MERaLiON-AudioLLM Web Demo](https://meralion.org/demo/) **MERaLiON-2** is an upgraded version of [MERaLiON-AudioLLM](https://huggingface.co/MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION). ## Performance: We benchmark the MERaLiON-2 series models with the extended [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) against several recently released open-source multimodal models (SALMONN-7B, the Qwen2.5-Omni series, and Phi-4-Multimodal), as well as two cascade models. **Better Automatic Speech Recognition (ASR) Accuracy** MERaLiON-2-10B-ASR and MERaLiON-2-10B demonstrate leading performance in Singlish, Mandarin, Malay, Tamil, and other Southeast Asian languages, while maintaining competitive results in English compared to `Whisper-large-v3`. The following table shows the average transcription `Word Error Rate` by language for the MERaLiON family and other leading AudioLLMs. The `Private Dataset` includes a collection of Singapore's locally accented speech with code-switching. 
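A note on the metric: the Word Error Rate below is the standard edit-distance-based measure (substitutions + deletions + insertions over the reference word count). As a minimal sketch, such scores can be computed with the `jiwer` package, though AudioBench's exact text normalization pipeline may differ:

```python
import jiwer

reference = "please transcribe the speech"
hypothesis = "please transcribes the speech"

# WER = (substitutions + deletions + insertions) / number of reference words
# One substitution over four reference words -> 0.25
print(jiwer.wer(reference, hypothesis))
```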
Please visit [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) for dataset-level evaluation results. <style type="text/css"> #T_0910c th { text-align: center; } #T_0910c_row0_col0, #T_0910c_row1_col0, #T_0910c_row2_col0, #T_0910c_row3_col0, #T_0910c_row4_col0, #T_0910c_row5_col0, #T_0910c_row6_col7, #T_0910c_row7_col0, #T_0910c_row8_col0 { font-weight: bold; text-decoration: underline; text-align: center; } #T_0910c_row0_col1, #T_0910c_row1_col1, #T_0910c_row2_col1, #T_0910c_row3_col1, #T_0910c_row4_col1, #T_0910c_row5_col1, #T_0910c_row6_col1, #T_0910c_row7_col1, #T_0910c_row8_col1 { text-align: center; } #T_0910c_row0_col2, #T_0910c_row0_col3, #T_0910c_row0_col4, #T_0910c_row0_col5, #T_0910c_row0_col6, #T_0910c_row0_col7, #T_0910c_row0_col8, #T_0910c_row0_col9, #T_0910c_row0_col10, #T_0910c_row0_col11, #T_0910c_row1_col2, #T_0910c_row1_col3, #T_0910c_row1_col4, #T_0910c_row1_col5, #T_0910c_row1_col6, #T_0910c_row1_col7, #T_0910c_row1_col8, #T_0910c_row1_col9, #T_0910c_row1_col10, #T_0910c_row1_col11, #T_0910c_row2_col2, #T_0910c_row2_col3, #T_0910c_row2_col4, #T_0910c_row2_col5, #T_0910c_row2_col6, #T_0910c_row2_col7, #T_0910c_row2_col8, #T_0910c_row2_col9, #T_0910c_row2_col10, #T_0910c_row2_col11, #T_0910c_row3_col2, #T_0910c_row3_col3, #T_0910c_row3_col4, #T_0910c_row3_col5, #T_0910c_row3_col6, #T_0910c_row3_col7, #T_0910c_row3_col8, #T_0910c_row3_col9, #T_0910c_row3_col10, #T_0910c_row3_col11, #T_0910c_row4_col2, #T_0910c_row4_col3, #T_0910c_row4_col4, #T_0910c_row4_col5, #T_0910c_row4_col6, #T_0910c_row4_col7, #T_0910c_row4_col8, #T_0910c_row4_col9, #T_0910c_row4_col10, #T_0910c_row4_col11, #T_0910c_row5_col2, #T_0910c_row5_col3, #T_0910c_row5_col4, #T_0910c_row5_col5, #T_0910c_row5_col6, #T_0910c_row5_col7, #T_0910c_row5_col8, #T_0910c_row5_col9, #T_0910c_row5_col10, #T_0910c_row5_col11, #T_0910c_row6_col0, #T_0910c_row6_col2, #T_0910c_row6_col3, #T_0910c_row6_col4, #T_0910c_row6_col5, #T_0910c_row6_col6, #T_0910c_row6_col8, #T_0910c_row6_col9, #T_0910c_row6_col10, #T_0910c_row6_col11, #T_0910c_row7_col2, #T_0910c_row7_col3, #T_0910c_row7_col4, #T_0910c_row7_col5, #T_0910c_row7_col6, #T_0910c_row7_col7, #T_0910c_row7_col8, #T_0910c_row7_col9, #T_0910c_row7_col10, #T_0910c_row7_col11, #T_0910c_row8_col2, #T_0910c_row8_col3, #T_0910c_row8_col4, #T_0910c_row8_col5, #T_0910c_row8_col6, #T_0910c_row8_col7, #T_0910c_row8_col8, #T_0910c_row8_col9, #T_0910c_row8_col10, #T_0910c_row8_col11 { text-align: center; } </style> <table id="T_0910c"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_0910c_level0_col0" class="col_heading level0 col0" >MERaLiON-2-10B-ASR</th> <th id="T_0910c_level0_col1" class="col_heading level0 col1" >MERaLiON-2-10B</th> <th id="T_0910c_level0_col2" class="col_heading level0 col2" >MERaLiON-2-3B</th> <th id="T_0910c_level0_col3" class="col_heading level0 col3" >whisper_large_v3</th> <th id="T_0910c_level0_col4" class="col_heading level0 col4" >cascade-whisper_large_v3-llama_3_8b_instruct</th> <th id="T_0910c_level0_col5" class="col_heading level0 col5" >cascade-whisper_large_v2-gemma2_9b_cpt-sea_lionv3_instruct</th> <th id="T_0910c_level0_col6" class="col_heading level0 col6" >MERaLiON-AudioLLM-Whisper-SEA-LION</th> <th id="T_0910c_level0_col7" class="col_heading level0 col7" >Qwen2.5-Omni-7B</th> <th id="T_0910c_level0_col8" class="col_heading level0 col8" >SeaLLMs-Audio-7B</th> <th id="T_0910c_level0_col9" class="col_heading level0 col9" >Qwen2.5-Omni-3B</th> <th id="T_0910c_level0_col10" class="col_heading level0 
col10" >SALMONN_7B</th> <th id="T_0910c_level0_col11" class="col_heading level0 col11" >phi_4_multimodal_instruct</th> </tr> </thead> <tbody> <tr> <th id="T_0910c_level0_row0" class="row_heading level0 row0" >Thai</th> <td id="T_0910c_row0_col0" class="data row0 col0" >0.096526</td> <td id="T_0910c_row0_col1" class="data row0 col1" >0.109365</td> <td id="T_0910c_row0_col2" class="data row0 col2" >0.107279</td> <td id="T_0910c_row0_col3" class="data row0 col3" >0.121073</td> <td id="T_0910c_row0_col4" class="data row0 col4" >0.120257</td> <td id="T_0910c_row0_col5" class="data row0 col5" >0.172105</td> <td id="T_0910c_row0_col6" class="data row0 col6" >0.919330</td> <td id="T_0910c_row0_col7" class="data row0 col7" >0.126497</td> <td id="T_0910c_row0_col8" class="data row0 col8" >0.117152</td> <td id="T_0910c_row0_col9" class="data row0 col9" >0.163150</td> <td id="T_0910c_row0_col10" class="data row0 col10" >1.191099</td> <td id="T_0910c_row0_col11" class="data row0 col11" >1.510068</td> </tr> <tr> <th id="T_0910c_level0_row1" class="row_heading level0 row1" >Tamil</th> <td id="T_0910c_row1_col0" class="data row1 col0" >0.271279</td> <td id="T_0910c_row1_col1" class="data row1 col1" >0.327081</td> <td id="T_0910c_row1_col2" class="data row1 col2" >0.344081</td> <td id="T_0910c_row1_col3" class="data row1 col3" >0.441483</td> <td id="T_0910c_row1_col4" class="data row1 col4" >0.475225</td> <td id="T_0910c_row1_col5" class="data row1 col5" >0.492336</td> <td id="T_0910c_row1_col6" class="data row1 col6" >0.561315</td> <td id="T_0910c_row1_col7" class="data row1 col7" >1.024916</td> <td id="T_0910c_row1_col8" class="data row1 col8" >2.325402</td> <td id="T_0910c_row1_col9" class="data row1 col9" >1.315143</td> <td id="T_0910c_row1_col10" class="data row1 col10" >1.306694</td> <td id="T_0910c_row1_col11" class="data row1 col11" >1.876722</td> </tr> <tr> <th id="T_0910c_level0_row2" class="row_heading level0 row2" >Singlish</th> <td id="T_0910c_row2_col0" class="data row2 col0" >0.129830</td> <td id="T_0910c_row2_col1" class="data row2 col1" >0.168813</td> <td id="T_0910c_row2_col2" class="data row2 col2" >0.180395</td> <td id="T_0910c_row2_col3" class="data row2 col3" >0.248945</td> <td id="T_0910c_row2_col4" class="data row2 col4" >0.251608</td> <td id="T_0910c_row2_col5" class="data row2 col5" >0.255717</td> <td id="T_0910c_row2_col6" class="data row2 col6" >0.143800</td> <td id="T_0910c_row2_col7" class="data row2 col7" >0.439071</td> <td id="T_0910c_row2_col8" class="data row2 col8" >0.795990</td> <td id="T_0910c_row2_col9" class="data row2 col9" >0.389393</td> <td id="T_0910c_row2_col10" class="data row2 col10" >0.441490</td> <td id="T_0910c_row2_col11" class="data row2 col11" >0.448863</td> </tr> <tr> <th id="T_0910c_level0_row3" class="row_heading level0 row3" >Malay</th> <td id="T_0910c_row3_col0" class="data row3 col0" >0.194638</td> <td id="T_0910c_row3_col1" class="data row3 col1" >0.209074</td> <td id="T_0910c_row3_col2" class="data row3 col2" >0.279891</td> <td id="T_0910c_row3_col3" class="data row3 col3" >0.219692</td> <td id="T_0910c_row3_col4" class="data row3 col4" >0.311921</td> <td id="T_0910c_row3_col5" class="data row3 col5" >0.314378</td> <td id="T_0910c_row3_col6" class="data row3 col6" >0.289895</td> <td id="T_0910c_row3_col7" class="data row3 col7" >1.460664</td> <td id="T_0910c_row3_col8" class="data row3 col8" >0.765565</td> <td id="T_0910c_row3_col9" class="data row3 col9" >2.943750</td> <td id="T_0910c_row3_col10" class="data row3 col10" >1.085867</td> <td 
id="T_0910c_row3_col11" class="data row3 col11" >3.762933</td> </tr> <tr> <th id="T_0910c_level0_row4" class="row_heading level0 row4" >English</th> <td id="T_0910c_row4_col0" class="data row4 col0" >0.078544</td> <td id="T_0910c_row4_col1" class="data row4 col1" >0.088259</td> <td id="T_0910c_row4_col2" class="data row4 col2" >0.122295</td> <td id="T_0910c_row4_col3" class="data row4 col3" >0.080841</td> <td id="T_0910c_row4_col4" class="data row4 col4" >0.081568</td> <td id="T_0910c_row4_col5" class="data row4 col5" >0.104830</td> <td id="T_0910c_row4_col6" class="data row4 col6" >0.110567</td> <td id="T_0910c_row4_col7" class="data row4 col7" >0.134216</td> <td id="T_0910c_row4_col8" class="data row4 col8" >0.197824</td> <td id="T_0910c_row4_col9" class="data row4 col9" >0.110353</td> <td id="T_0910c_row4_col10" class="data row4 col10" >0.191492</td> <td id="T_0910c_row4_col11" class="data row4 col11" >0.098225</td> </tr> <tr> <th id="T_0910c_level0_row5" class="row_heading level0 row5" >Indonesian</th> <td id="T_0910c_row5_col0" class="data row5 col0" >0.121020</td> <td id="T_0910c_row5_col1" class="data row5 col1" >0.142813</td> <td id="T_0910c_row5_col2" class="data row5 col2" >0.131950</td> <td id="T_0910c_row5_col3" class="data row5 col3" >0.137102</td> <td id="T_0910c_row5_col4" class="data row5 col4" >0.135390</td> <td id="T_0910c_row5_col5" class="data row5 col5" >0.159476</td> <td id="T_0910c_row5_col6" class="data row5 col6" >0.298365</td> <td id="T_0910c_row5_col7" class="data row5 col7" >0.168659</td> <td id="T_0910c_row5_col8" class="data row5 col8" >0.220227</td> <td id="T_0910c_row5_col9" class="data row5 col9" >0.205216</td> <td id="T_0910c_row5_col10" class="data row5 col10" >1.653502</td> <td id="T_0910c_row5_col11" class="data row5 col11" >3.565510</td> </tr> <tr> <th id="T_0910c_level0_row6" class="row_heading level0 row6" >Mandarian</th> <td id="T_0910c_row6_col0" class="data row6 col0" >0.103694</td> <td id="T_0910c_row6_col1" class="data row6 col1" >0.132025</td> <td id="T_0910c_row6_col2" class="data row6 col2" >0.145878</td> <td id="T_0910c_row6_col3" class="data row6 col3" >0.170980</td> <td id="T_0910c_row6_col4" class="data row6 col4" >0.196867</td> <td id="T_0910c_row6_col5" class="data row6 col5" >0.291733</td> <td id="T_0910c_row6_col6" class="data row6 col6" >0.291183</td> <td id="T_0910c_row6_col7" class="data row6 col7" >0.102419</td> <td id="T_0910c_row6_col8" class="data row6 col8" >0.309782</td> <td id="T_0910c_row6_col9" class="data row6 col9" >0.130429</td> <td id="T_0910c_row6_col10" class="data row6 col10" >0.939545</td> <td id="T_0910c_row6_col11" class="data row6 col11" >0.238879</td> </tr> <tr> <th id="T_0910c_level0_row7" class="row_heading level0 row7" >Vietnamese</th> <td id="T_0910c_row7_col0" class="data row7 col0" >0.118693</td> <td id="T_0910c_row7_col1" class="data row7 col1" >0.134808</td> <td id="T_0910c_row7_col2" class="data row7 col2" >0.155110</td> <td id="T_0910c_row7_col3" class="data row7 col3" >0.148474</td> <td id="T_0910c_row7_col4" class="data row7 col4" >0.136075</td> <td id="T_0910c_row7_col5" class="data row7 col5" >0.164078</td> <td id="T_0910c_row7_col6" class="data row7 col6" >0.952040</td> <td id="T_0910c_row7_col7" class="data row7 col7" >0.205491</td> <td id="T_0910c_row7_col8" class="data row7 col8" >0.222001</td> <td id="T_0910c_row7_col9" class="data row7 col9" >0.186786</td> <td id="T_0910c_row7_col10" class="data row7 col10" >1.521174</td> <td id="T_0910c_row7_col11" class="data row7 col11" >1.805643</td> 
</tr> <tr> <th id="T_0910c_level0_row8" class="row_heading level0 row8" >Private Dataset</th> <td id="T_0910c_row8_col0" class="data row8 col0" >0.106150</td> <td id="T_0910c_row8_col1" class="data row8 col1" >0.112360</td> <td id="T_0910c_row8_col2" class="data row8 col2" >0.147258</td> <td id="T_0910c_row8_col3" class="data row8 col3" >0.116630</td> <td id="T_0910c_row8_col4" class="data row8 col4" >0.118434</td> <td id="T_0910c_row8_col5" class="data row8 col5" >0.143812</td> <td id="T_0910c_row8_col6" class="data row8 col6" >0.130667</td> <td id="T_0910c_row8_col7" class="data row8 col7" >0.222770</td> <td id="T_0910c_row8_col8" class="data row8 col8" >0.496540</td> <td id="T_0910c_row8_col9" class="data row8 col9" >0.164556</td> <td id="T_0910c_row8_col10" class="data row8 col10" >0.273304</td> <td id="T_0910c_row8_col11" class="data row8 col11" >0.229450</td> </tr> </tbody> </table> **Better Instruction Following and Audio Understanding** **MERaLiON-2-10B** exhibits substantial advancements in speech and audio understanding, as well as paralinguistic tasks. Notably, it adeptly handles complex instructions and responds with enhanced flexibility, effectively preserving the pre-trained knowledge from Gemma during the audio fine-tuning process. This capability enables MERaLiON-2-10B to provide detailed explanations regarding speech content and the speaker's emotional state. Furthermore, with appropriate prompt adjustments, the model can assume various roles, such as a voice assistant, virtual caregiver, or an integral component of sophisticated multi-agent AI systems and software solutions. Please visit [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) for dataset-level evaluation results. <style type="text/css"> #T_b6ba8 th { text-align: center; } #T_b6ba8_row0_col0, #T_b6ba8_row2_col0, #T_b6ba8_row3_col0, #T_b6ba8_row5_col0, #T_b6ba8_row6_col0, #T_b6ba8_row8_col0, #T_b6ba8_row9_col0, #T_b6ba8_row10_col0 { text-align: center; } #T_b6ba8_row0_col1, #T_b6ba8_row0_col2, #T_b6ba8_row0_col3, #T_b6ba8_row0_col4, #T_b6ba8_row0_col5, #T_b6ba8_row0_col6, #T_b6ba8_row0_col7, #T_b6ba8_row0_col8, #T_b6ba8_row0_col9, #T_b6ba8_row0_col11, #T_b6ba8_row0_col12, #T_b6ba8_row0_col13, #T_b6ba8_row1_col1, #T_b6ba8_row1_col2, #T_b6ba8_row1_col3, #T_b6ba8_row1_col4, #T_b6ba8_row1_col5, #T_b6ba8_row1_col6, #T_b6ba8_row1_col7, #T_b6ba8_row1_col8, #T_b6ba8_row1_col9, #T_b6ba8_row1_col10, #T_b6ba8_row1_col11, #T_b6ba8_row1_col12, #T_b6ba8_row1_col13, #T_b6ba8_row2_col2, #T_b6ba8_row2_col3, #T_b6ba8_row2_col4, #T_b6ba8_row2_col5, #T_b6ba8_row2_col6, #T_b6ba8_row2_col7, #T_b6ba8_row2_col8, #T_b6ba8_row2_col9, #T_b6ba8_row2_col10, #T_b6ba8_row2_col11, #T_b6ba8_row2_col12, #T_b6ba8_row2_col13, #T_b6ba8_row3_col1, #T_b6ba8_row3_col3, #T_b6ba8_row3_col4, #T_b6ba8_row3_col5, #T_b6ba8_row3_col6, #T_b6ba8_row3_col7, #T_b6ba8_row3_col8, #T_b6ba8_row3_col9, #T_b6ba8_row3_col10, #T_b6ba8_row3_col11, #T_b6ba8_row3_col12, #T_b6ba8_row3_col13, #T_b6ba8_row4_col1, #T_b6ba8_row4_col2, #T_b6ba8_row4_col3, #T_b6ba8_row4_col4, #T_b6ba8_row4_col5, #T_b6ba8_row4_col6, #T_b6ba8_row4_col7, #T_b6ba8_row4_col8, #T_b6ba8_row4_col9, #T_b6ba8_row4_col10, #T_b6ba8_row4_col11, #T_b6ba8_row4_col12, #T_b6ba8_row4_col13, #T_b6ba8_row5_col1, #T_b6ba8_row5_col2, #T_b6ba8_row5_col3, #T_b6ba8_row5_col5, #T_b6ba8_row5_col6, #T_b6ba8_row5_col7, #T_b6ba8_row5_col8, #T_b6ba8_row5_col9, #T_b6ba8_row5_col10, #T_b6ba8_row5_col11, #T_b6ba8_row5_col12, #T_b6ba8_row5_col13, #T_b6ba8_row6_col1, #T_b6ba8_row6_col3, 
#T_b6ba8_row6_col4, #T_b6ba8_row6_col5, #T_b6ba8_row6_col6, #T_b6ba8_row6_col7, #T_b6ba8_row6_col8, #T_b6ba8_row6_col9, #T_b6ba8_row6_col10, #T_b6ba8_row6_col11, #T_b6ba8_row6_col12, #T_b6ba8_row6_col13, #T_b6ba8_row7_col1, #T_b6ba8_row7_col2, #T_b6ba8_row7_col3, #T_b6ba8_row7_col4, #T_b6ba8_row7_col5, #T_b6ba8_row7_col6, #T_b6ba8_row7_col7, #T_b6ba8_row7_col8, #T_b6ba8_row7_col9, #T_b6ba8_row7_col10, #T_b6ba8_row7_col11, #T_b6ba8_row7_col12, #T_b6ba8_row7_col13, #T_b6ba8_row8_col1, #T_b6ba8_row8_col2, #T_b6ba8_row8_col3, #T_b6ba8_row8_col4, #T_b6ba8_row8_col6, #T_b6ba8_row8_col7, #T_b6ba8_row8_col8, #T_b6ba8_row8_col9, #T_b6ba8_row8_col10, #T_b6ba8_row8_col11, #T_b6ba8_row8_col12, #T_b6ba8_row8_col13, #T_b6ba8_row9_col1, #T_b6ba8_row9_col2, #T_b6ba8_row9_col4, #T_b6ba8_row9_col5, #T_b6ba8_row9_col6, #T_b6ba8_row9_col7, #T_b6ba8_row9_col8, #T_b6ba8_row9_col9, #T_b6ba8_row9_col10, #T_b6ba8_row9_col11, #T_b6ba8_row9_col12, #T_b6ba8_row9_col13, #T_b6ba8_row10_col1, #T_b6ba8_row10_col3, #T_b6ba8_row10_col4, #T_b6ba8_row10_col5, #T_b6ba8_row10_col6, #T_b6ba8_row10_col7, #T_b6ba8_row10_col8, #T_b6ba8_row10_col9, #T_b6ba8_row10_col10, #T_b6ba8_row10_col11, #T_b6ba8_row10_col12, #T_b6ba8_row10_col13 { text-align: center; } #T_b6ba8_row0_col10, #T_b6ba8_row2_col1, #T_b6ba8_row3_col2, #T_b6ba8_row5_col4, #T_b6ba8_row6_col2, #T_b6ba8_row8_col5, #T_b6ba8_row9_col3, #T_b6ba8_row10_col2 { font-weight: bold; text-decoration: underline; text-align: center; } #T_b6ba8_row1_col0, #T_b6ba8_row4_col0, #T_b6ba8_row7_col0 { font-weight: bold; text-decoration: underline; text-align: center; } </style> <table id="T_b6ba8"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_b6ba8_level0_col0" class="col_heading level0 col0" >MERaLiON-2-10B</th> <th id="T_b6ba8_level0_col1" class="col_heading level0 col1" >MERaLiON-AudioLLM-Whisper-SEA-LION</th> <th id="T_b6ba8_level0_col2" class="col_heading level0 col2" >MERaLiON-2-10B-ASR</th> <th id="T_b6ba8_level0_col3" class="col_heading level0 col3" >MERaLiON-2-3B</th> <th id="T_b6ba8_level0_col4" class="col_heading level0 col4" >SeaLLMs-Audio-7B</th> <th id="T_b6ba8_level0_col5" class="col_heading level0 col5" >Qwen2-Audio-7B-Instruct</th> <th id="T_b6ba8_level0_col6" class="col_heading level0 col6" >Qwen2.5-Omni-3B</th> <th id="T_b6ba8_level0_col7" class="col_heading level0 col7" >phi_4_multimodal_instruct</th> <th id="T_b6ba8_level0_col8" class="col_heading level0 col8" >cascade-whisper_large_v3-llama_3_8b_instruct</th> <th id="T_b6ba8_level0_col9" class="col_heading level0 col9" >Qwen2.5-Omni-7B</th> <th id="T_b6ba8_level0_col10" class="col_heading level0 col10" >cascade-whisper_large_v2-gemma2_9b_cpt-sea_lionv3_instruct</th> <th id="T_b6ba8_level0_col11" class="col_heading level0 col11" >Qwen-Audio-Chat</th> <th id="T_b6ba8_level0_col12" class="col_heading level0 col12" >SALMONN_7B</th> <th id="T_b6ba8_level0_col13" class="col_heading level0 col13" >WavLLM_fairseq</th> </tr> </thead> <tbody> <tr> <th id="T_b6ba8_level0_row0" class="row_heading level0 row0" >Speech Instruction</th> <td id="T_b6ba8_row0_col0" class="data row0 col0" >70.200000</td> <td id="T_b6ba8_row0_col1" class="data row0 col1" >70.800000</td> <td id="T_b6ba8_row0_col2" class="data row0 col2" >13.400000</td> <td id="T_b6ba8_row0_col3" class="data row0 col3" >19.100000</td> <td id="T_b6ba8_row0_col4" class="data row0 col4" >66.900000</td> <td id="T_b6ba8_row0_col5" class="data row0 col5" >48.700000</td> <td id="T_b6ba8_row0_col6" class="data row0 col6" >65.000000</td> <td id="T_b6ba8_row0_col7" 
class="data row0 col7" >36.200000</td> <td id="T_b6ba8_row0_col8" class="data row0 col8" >66.100000</td> <td id="T_b6ba8_row0_col9" class="data row0 col9" >58.300000</td> <td id="T_b6ba8_row0_col10" class="data row0 col10" >72.900000</td> <td id="T_b6ba8_row0_col11" class="data row0 col11" >10.200000</td> <td id="T_b6ba8_row0_col12" class="data row0 col12" >12.900000</td> <td id="T_b6ba8_row0_col13" class="data row0 col13" >20.400000</td> </tr> <tr> <th id="T_b6ba8_level0_row1" class="row_heading level0 row1" >Emotion Recognition</th> <td id="T_b6ba8_row1_col0" class="data row1 col0" >63.736268</td> <td id="T_b6ba8_row1_col1" class="data row1 col1" >48.577313</td> <td id="T_b6ba8_row1_col2" class="data row1 col2" >53.693298</td> <td id="T_b6ba8_row1_col3" class="data row1 col3" >54.040797</td> <td id="T_b6ba8_row1_col4" class="data row1 col4" >52.007576</td> <td id="T_b6ba8_row1_col5" class="data row1 col5" >49.846540</td> <td id="T_b6ba8_row1_col6" class="data row1 col6" >33.037836</td> <td id="T_b6ba8_row1_col7" class="data row1 col7" >40.677800</td> <td id="T_b6ba8_row1_col8" class="data row1 col8" >50.937578</td> <td id="T_b6ba8_row1_col9" class="data row1 col9" >31.469397</td> <td id="T_b6ba8_row1_col10" class="data row1 col10" >48.214969</td> <td id="T_b6ba8_row1_col11" class="data row1 col11" >41.671551</td> <td id="T_b6ba8_row1_col12" class="data row1 col12" >33.584869</td> <td id="T_b6ba8_row1_col13" class="data row1 col13" >50.801545</td> </tr> <tr> <th id="T_b6ba8_level0_row2" class="row_heading level0 row2" >Audio Scene Question Answering</th> <td id="T_b6ba8_row2_col0" class="data row2 col0" >51.140374</td> <td id="T_b6ba8_row2_col1" class="data row2 col1" >52.207756</td> <td id="T_b6ba8_row2_col2" class="data row2 col2" >49.511886</td> <td id="T_b6ba8_row2_col3" class="data row2 col3" >46.141353</td> <td id="T_b6ba8_row2_col4" class="data row2 col4" >50.193739</td> <td id="T_b6ba8_row2_col5" class="data row2 col5" >47.048025</td> <td id="T_b6ba8_row2_col6" class="data row2 col6" >48.123228</td> <td id="T_b6ba8_row2_col7" class="data row2 col7" >42.217143</td> <td id="T_b6ba8_row2_col8" class="data row2 col8" >21.876943</td> <td id="T_b6ba8_row2_col9" class="data row2 col9" >45.669153</td> <td id="T_b6ba8_row2_col10" class="data row2 col10" >18.043681</td> <td id="T_b6ba8_row2_col11" class="data row2 col11" >51.618622</td> <td id="T_b6ba8_row2_col12" class="data row2 col12" >51.816958</td> <td id="T_b6ba8_row2_col13" class="data row2 col13" >33.034083</td> </tr> <tr> <th id="T_b6ba8_level0_row3" class="row_heading level0 row3" >Gender Recognition</th> <td id="T_b6ba8_row3_col0" class="data row3 col0" >95.109423</td> <td id="T_b6ba8_row3_col1" class="data row3 col1" >97.177396</td> <td id="T_b6ba8_row3_col2" class="data row3 col2" >97.220335</td> <td id="T_b6ba8_row3_col3" class="data row3 col3" >93.810266</td> <td id="T_b6ba8_row3_col4" class="data row3 col4" >75.449392</td> <td id="T_b6ba8_row3_col5" class="data row3 col5" >95.963266</td> <td id="T_b6ba8_row3_col6" class="data row3 col6" >47.867210</td> <td id="T_b6ba8_row3_col7" class="data row3 col7" >70.718047</td> <td id="T_b6ba8_row3_col8" class="data row3 col8" >57.039409</td> <td id="T_b6ba8_row3_col9" class="data row3 col9" >48.724711</td> <td id="T_b6ba8_row3_col10" class="data row3 col10" >19.421130</td> <td id="T_b6ba8_row3_col11" class="data row3 col11" >60.349349</td> <td id="T_b6ba8_row3_col12" class="data row3 col12" >84.365092</td> <td id="T_b6ba8_row3_col13" class="data row3 col13" >60.773275</td> </tr> <tr> 
<th id="T_b6ba8_level0_row4" class="row_heading level0 row4" >Spoken QA (Singlish)</th> <td id="T_b6ba8_row4_col0" class="data row4 col0" >66.550000</td> <td id="T_b6ba8_row4_col1" class="data row4 col1" >58.900000</td> <td id="T_b6ba8_row4_col2" class="data row4 col2" >61.850000</td> <td id="T_b6ba8_row4_col3" class="data row4 col3" >59.700000</td> <td id="T_b6ba8_row4_col4" class="data row4 col4" >51.350000</td> <td id="T_b6ba8_row4_col5" class="data row4 col5" >46.700000</td> <td id="T_b6ba8_row4_col6" class="data row4 col6" >60.500000</td> <td id="T_b6ba8_row4_col7" class="data row4 col7" >61.950000</td> <td id="T_b6ba8_row4_col8" class="data row4 col8" >59.350000</td> <td id="T_b6ba8_row4_col9" class="data row4 col9" >58.400000</td> <td id="T_b6ba8_row4_col10" class="data row4 col10" >53.750000</td> <td id="T_b6ba8_row4_col11" class="data row4 col11" >42.300000</td> <td id="T_b6ba8_row4_col12" class="data row4 col12" >43.200000</td> <td id="T_b6ba8_row4_col13" class="data row4 col13" >51.200000</td> </tr> <tr> <th id="T_b6ba8_level0_row5" class="row_heading level0 row5" >Audio Captioning</th> <td id="T_b6ba8_row5_col0" class="data row5 col0" >35.604270</td> <td id="T_b6ba8_row5_col1" class="data row5 col1" >36.976419</td> <td id="T_b6ba8_row5_col2" class="data row5 col2" >34.466710</td> <td id="T_b6ba8_row5_col3" class="data row5 col3" >33.243839</td> <td id="T_b6ba8_row5_col4" class="data row5 col4" >45.089372</td> <td id="T_b6ba8_row5_col5" class="data row5 col5" >37.278810</td> <td id="T_b6ba8_row5_col6" class="data row5 col6" >39.200328</td> <td id="T_b6ba8_row5_col7" class="data row5 col7" >30.832409</td> <td id="T_b6ba8_row5_col8" class="data row5 col8" >2.915778</td> <td id="T_b6ba8_row5_col9" class="data row5 col9" >31.896243</td> <td id="T_b6ba8_row5_col10" class="data row5 col10" >3.140568</td> <td id="T_b6ba8_row5_col11" class="data row5 col11" >39.988663</td> <td id="T_b6ba8_row5_col12" class="data row5 col12" >28.880570</td> <td id="T_b6ba8_row5_col13" class="data row5 col13" >6.200867</td> </tr> <tr> <th id="T_b6ba8_level0_row6" class="row_heading level0 row6" >Spoken Dialogue Summarisation</th> <td id="T_b6ba8_row6_col0" class="data row6 col0" >53.100000</td> <td id="T_b6ba8_row6_col1" class="data row6 col1" >53.600000</td> <td id="T_b6ba8_row6_col2" class="data row6 col2" >55.800000</td> <td id="T_b6ba8_row6_col3" class="data row6 col3" >48.550000</td> <td id="T_b6ba8_row6_col4" class="data row6 col4" >45.450000</td> <td id="T_b6ba8_row6_col5" class="data row6 col5" >36.300000</td> <td id="T_b6ba8_row6_col6" class="data row6 col6" >46.750000</td> <td id="T_b6ba8_row6_col7" class="data row6 col7" >50.750000</td> <td id="T_b6ba8_row6_col8" class="data row6 col8" >45.850000</td> <td id="T_b6ba8_row6_col9" class="data row6 col9" >43.150000</td> <td id="T_b6ba8_row6_col10" class="data row6 col10" >51.000000</td> <td id="T_b6ba8_row6_col11" class="data row6 col11" >25.250000</td> <td id="T_b6ba8_row6_col12" class="data row6 col12" >14.400000</td> <td id="T_b6ba8_row6_col13" class="data row6 col13" >39.450000</td> </tr> <tr> <th id="T_b6ba8_level0_row7" class="row_heading level0 row7" >Spoken QA (English)</th> <td id="T_b6ba8_row7_col0" class="data row7 col0" >79.735049</td> <td id="T_b6ba8_row7_col1" class="data row7 col1" >63.711481</td> <td id="T_b6ba8_row7_col2" class="data row7 col2" >73.975834</td> <td id="T_b6ba8_row7_col3" class="data row7 col3" >68.715179</td> <td id="T_b6ba8_row7_col4" class="data row7 col4" >70.920519</td> <td id="T_b6ba8_row7_col5" class="data 
row7 col5" >68.888565</td> <td id="T_b6ba8_row7_col6" class="data row7 col6" >67.818546</td> <td id="T_b6ba8_row7_col7" class="data row7 col7" >75.513152</td> <td id="T_b6ba8_row7_col8" class="data row7 col8" >78.526569</td> <td id="T_b6ba8_row7_col9" class="data row7 col9" >68.415131</td> <td id="T_b6ba8_row7_col10" class="data row7 col10" >67.814538</td> <td id="T_b6ba8_row7_col11" class="data row7 col11" >66.069047</td> <td id="T_b6ba8_row7_col12" class="data row7 col12" >60.649071</td> <td id="T_b6ba8_row7_col13" class="data row7 col13" >70.595242</td> </tr> <tr> <th id="T_b6ba8_level0_row8" class="row_heading level0 row8" >Music Understanding</th> <td id="T_b6ba8_row8_col0" class="data row8 col0" >63.942713</td> <td id="T_b6ba8_row8_col1" class="data row8 col1" >51.347936</td> <td id="T_b6ba8_row8_col2" class="data row8 col2" >60.657119</td> <td id="T_b6ba8_row8_col3" class="data row8 col3" >55.602359</td> <td id="T_b6ba8_row8_col4" class="data row8 col4" >63.689975</td> <td id="T_b6ba8_row8_col5" class="data row8 col5" >71.609099</td> <td id="T_b6ba8_row8_col6" class="data row8 col6" >59.309183</td> <td id="T_b6ba8_row8_col7" class="data row8 col7" >55.265375</td> <td id="T_b6ba8_row8_col8" class="data row8 col8" >56.697557</td> <td id="T_b6ba8_row8_col9" class="data row8 col9" >47.598989</td> <td id="T_b6ba8_row8_col10" class="data row8 col10" >50.463353</td> <td id="T_b6ba8_row8_col11" class="data row8 col11" >59.056445</td> <td id="T_b6ba8_row8_col12" class="data row8 col12" >49.705139</td> <td id="T_b6ba8_row8_col13" class="data row8 col13" >44.313395</td> </tr> <tr> <th id="T_b6ba8_level0_row9" class="row_heading level0 row9" >Accent Recognition</th> <td id="T_b6ba8_row9_col0" class="data row9 col0" >41.815396</td> <td id="T_b6ba8_row9_col1" class="data row9 col1" >43.799799</td> <td id="T_b6ba8_row9_col2" class="data row9 col2" >47.788864</td> <td id="T_b6ba8_row9_col3" class="data row9 col3" >60.054981</td> <td id="T_b6ba8_row9_col4" class="data row9 col4" >10.143836</td> <td id="T_b6ba8_row9_col5" class="data row9 col5" >10.901397</td> <td id="T_b6ba8_row9_col6" class="data row9 col6" >0.478694</td> <td id="T_b6ba8_row9_col7" class="data row9 col7" >3.097615</td> <td id="T_b6ba8_row9_col8" class="data row9 col8" >21.398482</td> <td id="T_b6ba8_row9_col9" class="data row9 col9" >0.587293</td> <td id="T_b6ba8_row9_col10" class="data row9 col10" >25.929693</td> <td id="T_b6ba8_row9_col11" class="data row9 col11" >17.550294</td> <td id="T_b6ba8_row9_col12" class="data row9 col12" >11.577381</td> <td id="T_b6ba8_row9_col13" class="data row9 col13" >14.294613</td> </tr> <tr> <th id="T_b6ba8_level0_row10" class="row_heading level0 row10" >Speech Translation</th> <td id="T_b6ba8_row10_col0" class="data row10 col0" >27.391115</td> <td id="T_b6ba8_row10_col1" class="data row10 col1" >27.086366</td> <td id="T_b6ba8_row10_col2" class="data row10 col2" >28.540359</td> <td id="T_b6ba8_row10_col3" class="data row10 col3" >22.130258</td> <td id="T_b6ba8_row10_col4" class="data row10 col4" >21.143215</td> <td id="T_b6ba8_row10_col5" class="data row10 col5" >10.826666</td> <td id="T_b6ba8_row10_col6" class="data row10 col6" >21.776628</td> <td id="T_b6ba8_row10_col7" class="data row10 col7" >13.827110</td> <td id="T_b6ba8_row10_col8" class="data row10 col8" >13.536272</td> <td id="T_b6ba8_row10_col9" class="data row10 col9" >20.688241</td> <td id="T_b6ba8_row10_col10" class="data row10 col10" >21.437997</td> <td id="T_b6ba8_row10_col11" class="data row10 col11" >4.973184</td> <td 
id="T_b6ba8_row10_col12" class="data row10 col12" >13.486003</td> <td id="T_b6ba8_row10_col13" class="data row10 col13" >9.046791</td> </tr> </tbody> </table> ## How to Use > [!WARNING] > **Out of Scope use**: This model is not intended for use in tool calling, math, and coding tasks. MERaLiON-2 requires `transformers` version `4.50.1` ``` pip install transformers==4.50.1 pip install librosa ``` To run in GPU, MERaLiON-2 requires `flash-attn`. ``` pip install flash-attn --no-build-isolation ``` > [!TIP] > Should you face any difficulties installing the above packages, you can try installing within this Docker container instead: > `pytorch/pytorch:2.5.1-cuda12.1-cudnn9-devel`, whose cuda and torch environments have been tested working. ### Audio Input - For ASR tasks, the maximum audio length is suggested to be 30 seconds at 16,000 Hz. - For general speech & audio understanding tasks, the maximum audio length is suggested to be 300 seconds at 16,000 Hz sampling rate. ### Text Prompt MERaLiON-2 is trained with this prompt template: ``` Instruction: <TextHere> \nFollow the text instruction based on the following audio: <SpeechHere> ``` It is generally recommended to follow this template, i.e., replace `<TextHere>` with your text instruction while leaving the `<SpeechHere>` untouched. We list a few useful example prompts here: **Standard prompts for better accuracy** ```python prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>" transcription_prompt = prompt_template.format(query="Please transcribe the speech") translation_prompt = prompt_template.format(query="Please translate the speech into xxx") ``` > [!WARNING] > Other prompts might not perform well on MERaLiON-2-10B-ASR. ### Huggingface Inference with CPU ```python import librosa from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor repo_id = "MERaLiON/MERaLiON-2-10B-ASR" processor = AutoProcessor.from_pretrained( repo_id, trust_remote_code=True, ) model = AutoModelForSpeechSeq2Seq.from_pretrained( repo_id, use_safetensors=True, trust_remote_code=True, ) prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>" transcribe_prompt = "Please transcribe this speech." translate_prompt = "Can you please translate this speech into written Chinese?" # batch inference of 2 samples conversation = [ [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}], [{"role": "user", "content": prompt_template.format(query=translate_prompt)}], ] chat_prompt = processor.tokenizer.apply_chat_template( conversation=conversation, tokenize=False, add_generation_prompt=True ) # Use audio at 16000hz. audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000) audio_array = [audio_array]*2 inputs = processor(text=chat_prompt, audios=audio_array) # adjust the `max_new_tokens` based on your use case. 
outputs = model.generate(**inputs, max_new_tokens=256) generated_ids = outputs[:, inputs['input_ids'].size(1):] response = processor.batch_decode(generated_ids, skip_special_tokens=True) ``` ### Huggingface Inference with GPU ```python import torch import librosa from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor repo_id = "MERaLiON/MERaLiON-2-10B-ASR" device = "cuda" processor = AutoProcessor.from_pretrained( repo_id, trust_remote_code=True, ) model = AutoModelForSpeechSeq2Seq.from_pretrained( repo_id, use_safetensors=True, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16 ).to(device) prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>" transcribe_prompt = "Please transcribe this speech." translate_prompt = "Can you please translate this speech into written Chinese?" # batch inference of 2 samples conversation = [ [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}], [{"role": "user", "content": prompt_template.format(query=translate_prompt)}], ] chat_prompt = processor.tokenizer.apply_chat_template( conversation=conversation, tokenize=False, add_generation_prompt=True ) # Use audio sampled at 16,000 Hz. audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000) audio_array = [audio_array]*2 inputs = processor(text=chat_prompt, audios=audio_array) for key, value in inputs.items(): if isinstance(value, torch.Tensor): inputs[key] = inputs[key].to(device) if value.dtype == torch.float32: inputs[key] = inputs[key].to(torch.bfloat16) # adjust the `max_new_tokens` based on your use case. outputs = model.generate(**inputs, max_new_tokens=256) generated_ids = outputs[:, inputs['input_ids'].size(1):] response = processor.batch_decode(generated_ids, skip_special_tokens=True) ``` ## ⚠️ Disclaimer The current MERaLiON-2 has not been specifically aligned for safety and may generate content that is inappropriate, offensive, or harmful. Developers and users are responsible for performing their own safety fine-tuning and implementing necessary security measures. The authors shall not be held liable for any claims, damages, or other liabilities arising from the use of the released models, weights, or code. ## Compute and Infrastructure MERaLiON-2 was trained on the [**ASPIRE 2A+**](https://help.nscc.sg/aspire2aplus/about/) Supercomputer Cluster, provided by [**National Supercomputing Centre (NSCC)**](https://www.nscc.sg/), Singapore. The ASPIRE 2A+ cluster provides multiple H100 nodes, with each compute node equipped with 8 Nvidia H100 GPUs, 2 TB of RAM, and 30 TB of locally attached NVMe storage. These nodes are interconnected via a rail-optimised, full fat-tree topology, utilising 400 Gb/s NDR InfiniBand cables. Additionally, the cluster incorporates a 2.5 PB SSD-based Lustre file system, linked to the H100 nodes through high-speed InfiniBand connections. With a global batch size of 768, we trained the current release of MERaLiON-2 for around 200k steps, which took around 2 days to complete using 16 nodes (128 H100 GPUs). 
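As noted in the Audio Input section above, ASR- and ST-style prompts work best on clips of at most 30 seconds at 16,000 Hz. Below is a minimal preprocessing sketch with `librosa` (already a dependency of the examples above); the file path and truncation cap are illustrative:

```python
import librosa

TARGET_SR = 16000   # the model expects 16,000 Hz mono audio
MAX_SECONDS = 30    # suggested cap for ASR / speech translation prompts

# librosa resamples to TARGET_SR and downmixes to mono on load.
audio_array, _ = librosa.load("/path/to/your/audio/file", sr=TARGET_SR, mono=True)

# Truncate clips longer than the suggested maximum before calling the processor.
audio_array = audio_array[: MAX_SECONDS * TARGET_SR]
```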
## 📚 Citation

If you find our work useful, please cite our papers:

[MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models](https://arxiv.org/abs/2412.09818) <br>
[AudioBench: A Universal Benchmark for Audio Large Language Models](https://aclanthology.org/2025.naacl-long.218/) <br>
[Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models](https://arxiv.org/abs/2501.01034) <br>
[MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders](https://arxiv.org/abs/2409.06635) <br>

```
@misc{he2024meralionaudiollmtechnicalreport,
  title={MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models},
  author={{MERaLiON Team}},
  year={2024},
  eprint={2412.09818},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.09818},
}
```

```
@article{wang2024audiobench,
  title={AudioBench: A Universal Benchmark for Audio Large Language Models},
  author={Wang, Bin and Zou, Xunlong and Lin, Geyu and Sun, Shuo and Liu, Zhuohan and Zhang, Wenyu and Liu, Zhengyuan and Aw, AiTi and Chen, Nancy F},
  journal={NAACL},
  year={2025}
}
```

```
@article{wang2025advancing,
  title={Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models},
  author={Wang, Bin and Zou, Xunlong and Sun, Shuo and Zhang, Wenyu and He, Yingxu and Liu, Zhuohan and Wei, Chengwei and Chen, Nancy F and Aw, AiTi},
  journal={arXiv preprint arXiv:2501.01034},
  year={2025}
}
```

```
@article{zhang2024mowe,
  title={MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders},
  author={Zhang, Wenyu and Sun, Shuo and Wang, Bin and Zou, Xunlong and Liu, Zhuohan and He, Yingxu and Lin, Geyu and Chen, Nancy F and Aw, Ai Ti},
  journal={ICASSP},
  year={2025}
}
```
endlesstools/etMVAdapter-endpoint
endlesstools
2025-06-05T10:54:00Z
0
0
null
[ "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-04T15:19:33Z
--- license: apache-2.0 ---
MERaLiON/MERaLiON-2-10B
MERaLiON
2025-06-05T10:53:43Z
115
5
transformers
[ "transformers", "safetensors", "meralion2", "automatic-speech-recognition", "meralion", "meralion-2", "custom_code", "en", "zh", "ms", "ta", "id", "th", "vi", "dataset:MERaLiON/Multitask-National-Speech-Corpus-v1", "arxiv:2412.09818", "arxiv:2501.01034", "arxiv:2409.06635", "arxiv:2501.08335", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "license:other", "region:us" ]
automatic-speech-recognition
2025-05-20T08:09:20Z
---
license: other
datasets:
- MERaLiON/Multitask-National-Speech-Corpus-v1
language:
- en
- zh
- ms
- ta
- id
- th
- vi
metrics:
- wer
- bleu
base_model:
- openai/whisper-large-v3
- google/gemma-2-9b-it
library_name: transformers
tags:
- meralion
- meralion-2
---

<h1 align="center">🔥 MERaLiON-2 🔥</h1>

<p align="center">
<a href="https://huggingface.co/MERaLiON/MERaLiON-2-10B">🚀 MERaLiON-2-10B</a> |
<a href="https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR">🚀 MERaLiON-2-10B-ASR</a> |
<a href="https://huggingface.co/MERaLiON/MERaLiON-2-3B">🚀 MERaLiON-2-3B</a> |
<a href="https://meralion.org/demo/">💻 Web Demo</a>
</p>

## Introduction

We are pleased to announce the release of **MERaLiON-2**, the latest addition to the MERaLiON family of speech-text large language models. Our flagship model, [**MERaLiON-2-10B**](https://huggingface.co/MERaLiON/MERaLiON-2-10B), demonstrates competitive performance across benchmark evaluations in tasks such as multilingual automatic speech recognition (ASR), speech translation (ST), audio scene understanding, emotion recognition, and general speech comprehension. These results are comparable to those achieved by other state-of-the-art open-source AudioLLMs, including Qwen2.5-Omni-7B and Phi-4-multimodal-instruct.

MERaLiON-2-10B is specifically designed to follow complex instructions with a nuanced understanding of **Singapore's multilingual and multicultural context**. It integrates a localized Whisper-large-v3 speech encoder and a Gemma-2-9b text decoder.

The following graph presents task-specific evaluation scores, assessed using the **LLM-as-a-Judge** framework across multiple datasets. For the speech translation task, performance is measured using the BLEU metric, where higher scores indicate better translation quality.

<img src="radar_task.png" alt="model_capability" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

In addition, we introduce an ASR-optimized variant, [**MERaLiON-2-10B-ASR**](https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR), which delivers a **5–30%** performance improvement over OpenAI's `whisper-large-v3` on speech recognition tasks. This enhancement spans Singapore's four official languages (**English**, **Mandarin**, **Malay**, and **Tamil**) as well as three Southeast Asian languages: **Indonesian**, **Thai**, and **Vietnamese**. The model also demonstrates robust handling of **code-switching scenarios** and local colloquialisms, reflecting its adaptability to Singapore's diverse linguistic landscape.

The following visualization illustrates the **1 - Word Error Rate (WER)** metric across these seven languages, comparing MERaLiON-2-10B-ASR with other leading models. A higher value indicates better transcription accuracy.

<img src="radar_asr.png" alt="model_capability" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

We also provide [MERaLiON-2-3B](https://huggingface.co/MERaLiON/MERaLiON-2-3B), which balances performance with reduced computational requirements, enabling broader accessibility and lightweight deployment.

- **Extended Audio Length**: Supports audio inputs of up to 300 seconds (5 minutes) for audio & speech question answering tasks, and **up to 30 seconds for satisfactory performance on speech transcription (ASR) and speech translation (ST) tasks**.
- **Expanded Language Coverage**: In addition to English, Chinese, and Singlish, V2 introduces support for Malay, Tamil, and other Southeast Asian languages, including Indonesian, Thai, and Vietnamese.
- **Improved Performance**: Achieves higher performance across a wide range of tasks. See the [Evaluation](#performance) section for detailed benchmarks.
- **Higher Quality Training Data**: Trained on 120,000 hours of curated speech and audio data, filtered for quality and diversity, with an emphasis on local and multilingual audio sources.
- **Three Model Variants**: Available in general-purpose ([MERaLiON-2-10B](https://huggingface.co/MERaLiON/MERaLiON-2-10B)), ASR-optimized ([MERaLiON-2-10B-ASR](https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR)), and lightweight ([MERaLiON-2-3B](https://huggingface.co/MERaLiON/MERaLiON-2-3B)) configurations to balance latency, compute efficiency, and task performance across different deployment needs.

## Model Description:

MERaLiON stands for **M**ultimodal **E**mpathetic **R**easoning **a**nd **L**earning **i**n **O**ne **N**etwork.

MERaLiON-2 is a family of Speech-Text Large Language Models tailored for **Singapore's multilingual and multicultural landscape**, as well as the wider **Southeast Asian region**. The 10B model integrates a localized [Whisper-Large-V3](https://huggingface.co/openai/whisper-large-v3) speech encoder with the [Gemma2-9b-IT](https://huggingface.co/google/gemma-2-9b-it) text decoder. The 3B model integrates a localized [Whisper-Large-V3](https://huggingface.co/openai/whisper-large-v3) speech encoder with the [Gemma2-2b-IT](https://huggingface.co/google/gemma-2-2b-it) text decoder.

MERaLiON-2-10B is fine-tuned on **120,000 hours of speech and audio data** across **6 diverse tasks**: Automatic Speech Recognition (ASR), Spoken Question Answering (SQA), Spoken Dialogue Summarization (SDS), Audio Captioning (AC), Audio-Scene Question Answering (ASQA), and Paralinguistic Question Answering (PQA). The model supports long-form audio inputs of up to 300 seconds (5 minutes) and is specifically adapted to handle the linguistic nuances, accents, and dialects commonly found across Singapore and neighboring countries.

- **Developed by:** I<sup>2</sup>R, A\*STAR, Singapore
- **Model type:** Multimodal LLM
- **Language(s):** Primarily English (Global and Singapore) and Chinese, with support for audio in regional languages including Malay, Tamil, Indonesian, Thai, and Vietnamese.
- **Audio:** **Mono**-channel audio, **16,000** Hz, up to **300** seconds.
- **License:** [MERaLiON Public License](MERaLiON-Public-Licence-v2.pdf)
- **Demo:** [MERaLiON-AudioLLM Web Demo](https://meralion.org/demo/)

**MERaLiON-2** is an upgraded version of [MERaLiON-AudioLLM](https://huggingface.co/MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION).

## Performance:

We benchmark the MERaLiON-2 series models with the extended [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) against several recently released open-source multimodal models (SALMONN-7B, the Qwen2.5-Omni series, and Phi-4-Multimodal) as well as two cascade models.

**Better Automatic Speech Recognition (ASR) Accuracy**

MERaLiON-2-10B-ASR and MERaLiON-2-10B demonstrate leading performance in Singlish, Mandarin, Malay, Tamil, and other Southeast Asian languages, while maintaining competitive results in English compared to `Whisper-large-v3`. The following table shows the average transcription Word Error Rate (WER, lower is better) by language for the MERaLiON family and other leading AudioLLMs. The `Private Dataset` comprises a collection of Singapore's locally accented speech with code-switching.
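As a rough guide to reading these numbers: a WER of about 0.13, which MERaLiON-2-10B-ASR records on Singlish below, corresponds to roughly 13 word errors per 100 reference words, so the 1 - WER values plotted above can be read as accuracy-style scores.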
Please visit [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) for dataset-level evaluation results. <style type="text/css"> #T_0910c th { text-align: center; } #T_0910c_row0_col0, #T_0910c_row1_col0, #T_0910c_row2_col0, #T_0910c_row3_col0, #T_0910c_row4_col0, #T_0910c_row5_col0, #T_0910c_row6_col7, #T_0910c_row7_col0, #T_0910c_row8_col0 { font-weight: bold; text-decoration: underline; text-align: center; } #T_0910c_row0_col1, #T_0910c_row1_col1, #T_0910c_row2_col1, #T_0910c_row3_col1, #T_0910c_row4_col1, #T_0910c_row5_col1, #T_0910c_row6_col1, #T_0910c_row7_col1, #T_0910c_row8_col1 { text-align: center; } #T_0910c_row0_col2, #T_0910c_row0_col3, #T_0910c_row0_col4, #T_0910c_row0_col5, #T_0910c_row0_col6, #T_0910c_row0_col7, #T_0910c_row0_col8, #T_0910c_row0_col9, #T_0910c_row0_col10, #T_0910c_row0_col11, #T_0910c_row1_col2, #T_0910c_row1_col3, #T_0910c_row1_col4, #T_0910c_row1_col5, #T_0910c_row1_col6, #T_0910c_row1_col7, #T_0910c_row1_col8, #T_0910c_row1_col9, #T_0910c_row1_col10, #T_0910c_row1_col11, #T_0910c_row2_col2, #T_0910c_row2_col3, #T_0910c_row2_col4, #T_0910c_row2_col5, #T_0910c_row2_col6, #T_0910c_row2_col7, #T_0910c_row2_col8, #T_0910c_row2_col9, #T_0910c_row2_col10, #T_0910c_row2_col11, #T_0910c_row3_col2, #T_0910c_row3_col3, #T_0910c_row3_col4, #T_0910c_row3_col5, #T_0910c_row3_col6, #T_0910c_row3_col7, #T_0910c_row3_col8, #T_0910c_row3_col9, #T_0910c_row3_col10, #T_0910c_row3_col11, #T_0910c_row4_col2, #T_0910c_row4_col3, #T_0910c_row4_col4, #T_0910c_row4_col5, #T_0910c_row4_col6, #T_0910c_row4_col7, #T_0910c_row4_col8, #T_0910c_row4_col9, #T_0910c_row4_col10, #T_0910c_row4_col11, #T_0910c_row5_col2, #T_0910c_row5_col3, #T_0910c_row5_col4, #T_0910c_row5_col5, #T_0910c_row5_col6, #T_0910c_row5_col7, #T_0910c_row5_col8, #T_0910c_row5_col9, #T_0910c_row5_col10, #T_0910c_row5_col11, #T_0910c_row6_col0, #T_0910c_row6_col2, #T_0910c_row6_col3, #T_0910c_row6_col4, #T_0910c_row6_col5, #T_0910c_row6_col6, #T_0910c_row6_col8, #T_0910c_row6_col9, #T_0910c_row6_col10, #T_0910c_row6_col11, #T_0910c_row7_col2, #T_0910c_row7_col3, #T_0910c_row7_col4, #T_0910c_row7_col5, #T_0910c_row7_col6, #T_0910c_row7_col7, #T_0910c_row7_col8, #T_0910c_row7_col9, #T_0910c_row7_col10, #T_0910c_row7_col11, #T_0910c_row8_col2, #T_0910c_row8_col3, #T_0910c_row8_col4, #T_0910c_row8_col5, #T_0910c_row8_col6, #T_0910c_row8_col7, #T_0910c_row8_col8, #T_0910c_row8_col9, #T_0910c_row8_col10, #T_0910c_row8_col11 { text-align: center; } </style> <table id="T_0910c"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_0910c_level0_col0" class="col_heading level0 col0" >MERaLiON-2-10B-ASR</th> <th id="T_0910c_level0_col1" class="col_heading level0 col1" >MERaLiON-2-10B</th> <th id="T_0910c_level0_col2" class="col_heading level0 col2" >MERaLiON-2-3B</th> <th id="T_0910c_level0_col3" class="col_heading level0 col3" >whisper_large_v3</th> <th id="T_0910c_level0_col4" class="col_heading level0 col4" >cascade-whisper_large_v3-llama_3_8b_instruct</th> <th id="T_0910c_level0_col5" class="col_heading level0 col5" >cascade-whisper_large_v2-gemma2_9b_cpt-sea_lionv3_instruct</th> <th id="T_0910c_level0_col6" class="col_heading level0 col6" >MERaLiON-AudioLLM-Whisper-SEA-LION</th> <th id="T_0910c_level0_col7" class="col_heading level0 col7" >Qwen2.5-Omni-7B</th> <th id="T_0910c_level0_col8" class="col_heading level0 col8" >SeaLLMs-Audio-7B</th> <th id="T_0910c_level0_col9" class="col_heading level0 col9" >Qwen2.5-Omni-3B</th> <th id="T_0910c_level0_col10" class="col_heading level0 
col10" >SALMONN_7B</th> <th id="T_0910c_level0_col11" class="col_heading level0 col11" >phi_4_multimodal_instruct</th> </tr> </thead> <tbody> <tr> <th id="T_0910c_level0_row0" class="row_heading level0 row0" >Thai</th> <td id="T_0910c_row0_col0" class="data row0 col0" >0.096526</td> <td id="T_0910c_row0_col1" class="data row0 col1" >0.109365</td> <td id="T_0910c_row0_col2" class="data row0 col2" >0.107279</td> <td id="T_0910c_row0_col3" class="data row0 col3" >0.121073</td> <td id="T_0910c_row0_col4" class="data row0 col4" >0.120257</td> <td id="T_0910c_row0_col5" class="data row0 col5" >0.172105</td> <td id="T_0910c_row0_col6" class="data row0 col6" >0.919330</td> <td id="T_0910c_row0_col7" class="data row0 col7" >0.126497</td> <td id="T_0910c_row0_col8" class="data row0 col8" >0.117152</td> <td id="T_0910c_row0_col9" class="data row0 col9" >0.163150</td> <td id="T_0910c_row0_col10" class="data row0 col10" >1.191099</td> <td id="T_0910c_row0_col11" class="data row0 col11" >1.510068</td> </tr> <tr> <th id="T_0910c_level0_row1" class="row_heading level0 row1" >Tamil</th> <td id="T_0910c_row1_col0" class="data row1 col0" >0.271279</td> <td id="T_0910c_row1_col1" class="data row1 col1" >0.327081</td> <td id="T_0910c_row1_col2" class="data row1 col2" >0.344081</td> <td id="T_0910c_row1_col3" class="data row1 col3" >0.441483</td> <td id="T_0910c_row1_col4" class="data row1 col4" >0.475225</td> <td id="T_0910c_row1_col5" class="data row1 col5" >0.492336</td> <td id="T_0910c_row1_col6" class="data row1 col6" >0.561315</td> <td id="T_0910c_row1_col7" class="data row1 col7" >1.024916</td> <td id="T_0910c_row1_col8" class="data row1 col8" >2.325402</td> <td id="T_0910c_row1_col9" class="data row1 col9" >1.315143</td> <td id="T_0910c_row1_col10" class="data row1 col10" >1.306694</td> <td id="T_0910c_row1_col11" class="data row1 col11" >1.876722</td> </tr> <tr> <th id="T_0910c_level0_row2" class="row_heading level0 row2" >Singlish</th> <td id="T_0910c_row2_col0" class="data row2 col0" >0.129830</td> <td id="T_0910c_row2_col1" class="data row2 col1" >0.168813</td> <td id="T_0910c_row2_col2" class="data row2 col2" >0.180395</td> <td id="T_0910c_row2_col3" class="data row2 col3" >0.248945</td> <td id="T_0910c_row2_col4" class="data row2 col4" >0.251608</td> <td id="T_0910c_row2_col5" class="data row2 col5" >0.255717</td> <td id="T_0910c_row2_col6" class="data row2 col6" >0.143800</td> <td id="T_0910c_row2_col7" class="data row2 col7" >0.439071</td> <td id="T_0910c_row2_col8" class="data row2 col8" >0.795990</td> <td id="T_0910c_row2_col9" class="data row2 col9" >0.389393</td> <td id="T_0910c_row2_col10" class="data row2 col10" >0.441490</td> <td id="T_0910c_row2_col11" class="data row2 col11" >0.448863</td> </tr> <tr> <th id="T_0910c_level0_row3" class="row_heading level0 row3" >Malay</th> <td id="T_0910c_row3_col0" class="data row3 col0" >0.194638</td> <td id="T_0910c_row3_col1" class="data row3 col1" >0.209074</td> <td id="T_0910c_row3_col2" class="data row3 col2" >0.279891</td> <td id="T_0910c_row3_col3" class="data row3 col3" >0.219692</td> <td id="T_0910c_row3_col4" class="data row3 col4" >0.311921</td> <td id="T_0910c_row3_col5" class="data row3 col5" >0.314378</td> <td id="T_0910c_row3_col6" class="data row3 col6" >0.289895</td> <td id="T_0910c_row3_col7" class="data row3 col7" >1.460664</td> <td id="T_0910c_row3_col8" class="data row3 col8" >0.765565</td> <td id="T_0910c_row3_col9" class="data row3 col9" >2.943750</td> <td id="T_0910c_row3_col10" class="data row3 col10" >1.085867</td> <td 
id="T_0910c_row3_col11" class="data row3 col11" >3.762933</td>
</tr>
<tr>
<th id="T_0910c_level0_row4" class="row_heading level0 row4" >English</th>
<td id="T_0910c_row4_col0" class="data row4 col0" >0.078544</td>
<td id="T_0910c_row4_col1" class="data row4 col1" >0.088259</td>
<td id="T_0910c_row4_col2" class="data row4 col2" >0.122295</td>
<td id="T_0910c_row4_col3" class="data row4 col3" >0.080841</td>
<td id="T_0910c_row4_col4" class="data row4 col4" >0.081568</td>
<td id="T_0910c_row4_col5" class="data row4 col5" >0.104830</td>
<td id="T_0910c_row4_col6" class="data row4 col6" >0.110567</td>
<td id="T_0910c_row4_col7" class="data row4 col7" >0.134216</td>
<td id="T_0910c_row4_col8" class="data row4 col8" >0.197824</td>
<td id="T_0910c_row4_col9" class="data row4 col9" >0.110353</td>
<td id="T_0910c_row4_col10" class="data row4 col10" >0.191492</td>
<td id="T_0910c_row4_col11" class="data row4 col11" >0.098225</td>
</tr>
<tr>
<th id="T_0910c_level0_row5" class="row_heading level0 row5" >Indonesian</th>
<td id="T_0910c_row5_col0" class="data row5 col0" >0.121020</td>
<td id="T_0910c_row5_col1" class="data row5 col1" >0.142813</td>
<td id="T_0910c_row5_col2" class="data row5 col2" >0.131950</td>
<td id="T_0910c_row5_col3" class="data row5 col3" >0.137102</td>
<td id="T_0910c_row5_col4" class="data row5 col4" >0.135390</td>
<td id="T_0910c_row5_col5" class="data row5 col5" >0.159476</td>
<td id="T_0910c_row5_col6" class="data row5 col6" >0.298365</td>
<td id="T_0910c_row5_col7" class="data row5 col7" >0.168659</td>
<td id="T_0910c_row5_col8" class="data row5 col8" >0.220227</td>
<td id="T_0910c_row5_col9" class="data row5 col9" >0.205216</td>
<td id="T_0910c_row5_col10" class="data row5 col10" >1.653502</td>
<td id="T_0910c_row5_col11" class="data row5 col11" >3.565510</td>
</tr>
<tr>
<th id="T_0910c_level0_row6" class="row_heading level0 row6" >Mandarin</th>
<td id="T_0910c_row6_col0" class="data row6 col0" >0.103694</td>
<td id="T_0910c_row6_col1" class="data row6 col1" >0.132025</td>
<td id="T_0910c_row6_col2" class="data row6 col2" >0.145878</td>
<td id="T_0910c_row6_col3" class="data row6 col3" >0.170980</td>
<td id="T_0910c_row6_col4" class="data row6 col4" >0.196867</td>
<td id="T_0910c_row6_col5" class="data row6 col5" >0.291733</td>
<td id="T_0910c_row6_col6" class="data row6 col6" >0.291183</td>
<td id="T_0910c_row6_col7" class="data row6 col7" >0.102419</td>
<td id="T_0910c_row6_col8" class="data row6 col8" >0.309782</td>
<td id="T_0910c_row6_col9" class="data row6 col9" >0.130429</td>
<td id="T_0910c_row6_col10" class="data row6 col10" >0.939545</td>
<td id="T_0910c_row6_col11" class="data row6 col11" >0.238879</td>
</tr>
<tr>
<th id="T_0910c_level0_row7" class="row_heading level0 row7" >Vietnamese</th>
<td id="T_0910c_row7_col0" class="data row7 col0" >0.118693</td>
<td id="T_0910c_row7_col1" class="data row7 col1" >0.134808</td>
<td id="T_0910c_row7_col2" class="data row7 col2" >0.155110</td>
<td id="T_0910c_row7_col3" class="data row7 col3" >0.148474</td>
<td id="T_0910c_row7_col4" class="data row7 col4" >0.136075</td>
<td id="T_0910c_row7_col5" class="data row7 col5" >0.164078</td>
<td id="T_0910c_row7_col6" class="data row7 col6" >0.952040</td>
<td id="T_0910c_row7_col7" class="data row7 col7" >0.205491</td>
<td id="T_0910c_row7_col8" class="data row7 col8" >0.222001</td>
<td id="T_0910c_row7_col9" class="data row7 col9" >0.186786</td>
<td id="T_0910c_row7_col10" class="data row7 col10" >1.521174</td>
<td id="T_0910c_row7_col11" class="data row7 col11" >1.805643</td>
</tr> <tr> <th id="T_0910c_level0_row8" class="row_heading level0 row8" >Private Dataset</th> <td id="T_0910c_row8_col0" class="data row8 col0" >0.106150</td> <td id="T_0910c_row8_col1" class="data row8 col1" >0.112360</td> <td id="T_0910c_row8_col2" class="data row8 col2" >0.147258</td> <td id="T_0910c_row8_col3" class="data row8 col3" >0.116630</td> <td id="T_0910c_row8_col4" class="data row8 col4" >0.118434</td> <td id="T_0910c_row8_col5" class="data row8 col5" >0.143812</td> <td id="T_0910c_row8_col6" class="data row8 col6" >0.130667</td> <td id="T_0910c_row8_col7" class="data row8 col7" >0.222770</td> <td id="T_0910c_row8_col8" class="data row8 col8" >0.496540</td> <td id="T_0910c_row8_col9" class="data row8 col9" >0.164556</td> <td id="T_0910c_row8_col10" class="data row8 col10" >0.273304</td> <td id="T_0910c_row8_col11" class="data row8 col11" >0.229450</td> </tr> </tbody> </table> **Better Instruction Following and Audio Understanding** **MERaLiON-2-10B** exhibits substantial advancements in speech and audio understanding, as well as paralinguistic tasks. Notably, it adeptly handles complex instructions and responds with enhanced flexibility, effectively preserving the pre-trained knowledge from Gemma during the audio fine-tuning process. This capability enables MERaLiON-2-10B to provide detailed explanations regarding speech content and the speaker's emotional state. Furthermore, with appropriate prompt adjustments, the model can assume various roles, such as a voice assistant, virtual caregiver, or an integral component of sophisticated multi-agent AI systems and software solutions. Please visit [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) for dataset-level evaluation results. <style type="text/css"> #T_b6ba8 th { text-align: center; } #T_b6ba8_row0_col0, #T_b6ba8_row2_col0, #T_b6ba8_row3_col0, #T_b6ba8_row5_col0, #T_b6ba8_row6_col0, #T_b6ba8_row8_col0, #T_b6ba8_row9_col0, #T_b6ba8_row10_col0 { text-align: center; } #T_b6ba8_row0_col1, #T_b6ba8_row0_col2, #T_b6ba8_row0_col3, #T_b6ba8_row0_col4, #T_b6ba8_row0_col5, #T_b6ba8_row0_col6, #T_b6ba8_row0_col7, #T_b6ba8_row0_col8, #T_b6ba8_row0_col9, #T_b6ba8_row0_col11, #T_b6ba8_row0_col12, #T_b6ba8_row0_col13, #T_b6ba8_row1_col1, #T_b6ba8_row1_col2, #T_b6ba8_row1_col3, #T_b6ba8_row1_col4, #T_b6ba8_row1_col5, #T_b6ba8_row1_col6, #T_b6ba8_row1_col7, #T_b6ba8_row1_col8, #T_b6ba8_row1_col9, #T_b6ba8_row1_col10, #T_b6ba8_row1_col11, #T_b6ba8_row1_col12, #T_b6ba8_row1_col13, #T_b6ba8_row2_col2, #T_b6ba8_row2_col3, #T_b6ba8_row2_col4, #T_b6ba8_row2_col5, #T_b6ba8_row2_col6, #T_b6ba8_row2_col7, #T_b6ba8_row2_col8, #T_b6ba8_row2_col9, #T_b6ba8_row2_col10, #T_b6ba8_row2_col11, #T_b6ba8_row2_col12, #T_b6ba8_row2_col13, #T_b6ba8_row3_col1, #T_b6ba8_row3_col3, #T_b6ba8_row3_col4, #T_b6ba8_row3_col5, #T_b6ba8_row3_col6, #T_b6ba8_row3_col7, #T_b6ba8_row3_col8, #T_b6ba8_row3_col9, #T_b6ba8_row3_col10, #T_b6ba8_row3_col11, #T_b6ba8_row3_col12, #T_b6ba8_row3_col13, #T_b6ba8_row4_col1, #T_b6ba8_row4_col2, #T_b6ba8_row4_col3, #T_b6ba8_row4_col4, #T_b6ba8_row4_col5, #T_b6ba8_row4_col6, #T_b6ba8_row4_col7, #T_b6ba8_row4_col8, #T_b6ba8_row4_col9, #T_b6ba8_row4_col10, #T_b6ba8_row4_col11, #T_b6ba8_row4_col12, #T_b6ba8_row4_col13, #T_b6ba8_row5_col1, #T_b6ba8_row5_col2, #T_b6ba8_row5_col3, #T_b6ba8_row5_col5, #T_b6ba8_row5_col6, #T_b6ba8_row5_col7, #T_b6ba8_row5_col8, #T_b6ba8_row5_col9, #T_b6ba8_row5_col10, #T_b6ba8_row5_col11, #T_b6ba8_row5_col12, #T_b6ba8_row5_col13, #T_b6ba8_row6_col1, #T_b6ba8_row6_col3, 
#T_b6ba8_row6_col4, #T_b6ba8_row6_col5, #T_b6ba8_row6_col6, #T_b6ba8_row6_col7, #T_b6ba8_row6_col8, #T_b6ba8_row6_col9, #T_b6ba8_row6_col10, #T_b6ba8_row6_col11, #T_b6ba8_row6_col12, #T_b6ba8_row6_col13, #T_b6ba8_row7_col1, #T_b6ba8_row7_col2, #T_b6ba8_row7_col3, #T_b6ba8_row7_col4, #T_b6ba8_row7_col5, #T_b6ba8_row7_col6, #T_b6ba8_row7_col7, #T_b6ba8_row7_col8, #T_b6ba8_row7_col9, #T_b6ba8_row7_col10, #T_b6ba8_row7_col11, #T_b6ba8_row7_col12, #T_b6ba8_row7_col13, #T_b6ba8_row8_col1, #T_b6ba8_row8_col2, #T_b6ba8_row8_col3, #T_b6ba8_row8_col4, #T_b6ba8_row8_col6, #T_b6ba8_row8_col7, #T_b6ba8_row8_col8, #T_b6ba8_row8_col9, #T_b6ba8_row8_col10, #T_b6ba8_row8_col11, #T_b6ba8_row8_col12, #T_b6ba8_row8_col13, #T_b6ba8_row9_col1, #T_b6ba8_row9_col2, #T_b6ba8_row9_col4, #T_b6ba8_row9_col5, #T_b6ba8_row9_col6, #T_b6ba8_row9_col7, #T_b6ba8_row9_col8, #T_b6ba8_row9_col9, #T_b6ba8_row9_col10, #T_b6ba8_row9_col11, #T_b6ba8_row9_col12, #T_b6ba8_row9_col13, #T_b6ba8_row10_col1, #T_b6ba8_row10_col3, #T_b6ba8_row10_col4, #T_b6ba8_row10_col5, #T_b6ba8_row10_col6, #T_b6ba8_row10_col7, #T_b6ba8_row10_col8, #T_b6ba8_row10_col9, #T_b6ba8_row10_col10, #T_b6ba8_row10_col11, #T_b6ba8_row10_col12, #T_b6ba8_row10_col13 { text-align: center; } #T_b6ba8_row0_col10, #T_b6ba8_row2_col1, #T_b6ba8_row3_col2, #T_b6ba8_row5_col4, #T_b6ba8_row6_col2, #T_b6ba8_row8_col5, #T_b6ba8_row9_col3, #T_b6ba8_row10_col2 { font-weight: bold; text-decoration: underline; text-align: center; } #T_b6ba8_row1_col0, #T_b6ba8_row4_col0, #T_b6ba8_row7_col0 { font-weight: bold; text-decoration: underline; text-align: center; } </style> <table id="T_b6ba8"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_b6ba8_level0_col0" class="col_heading level0 col0" >MERaLiON-2-10B</th> <th id="T_b6ba8_level0_col1" class="col_heading level0 col1" >MERaLiON-AudioLLM-Whisper-SEA-LION</th> <th id="T_b6ba8_level0_col2" class="col_heading level0 col2" >MERaLiON-2-10B-ASR</th> <th id="T_b6ba8_level0_col3" class="col_heading level0 col3" >MERaLiON-2-3B</th> <th id="T_b6ba8_level0_col4" class="col_heading level0 col4" >SeaLLMs-Audio-7B</th> <th id="T_b6ba8_level0_col5" class="col_heading level0 col5" >Qwen2-Audio-7B-Instruct</th> <th id="T_b6ba8_level0_col6" class="col_heading level0 col6" >Qwen2.5-Omni-3B</th> <th id="T_b6ba8_level0_col7" class="col_heading level0 col7" >phi_4_multimodal_instruct</th> <th id="T_b6ba8_level0_col8" class="col_heading level0 col8" >cascade-whisper_large_v3-llama_3_8b_instruct</th> <th id="T_b6ba8_level0_col9" class="col_heading level0 col9" >Qwen2.5-Omni-7B</th> <th id="T_b6ba8_level0_col10" class="col_heading level0 col10" >cascade-whisper_large_v2-gemma2_9b_cpt-sea_lionv3_instruct</th> <th id="T_b6ba8_level0_col11" class="col_heading level0 col11" >Qwen-Audio-Chat</th> <th id="T_b6ba8_level0_col12" class="col_heading level0 col12" >SALMONN_7B</th> <th id="T_b6ba8_level0_col13" class="col_heading level0 col13" >WavLLM_fairseq</th> </tr> </thead> <tbody> <tr> <th id="T_b6ba8_level0_row0" class="row_heading level0 row0" >Speech Instruction</th> <td id="T_b6ba8_row0_col0" class="data row0 col0" >70.200000</td> <td id="T_b6ba8_row0_col1" class="data row0 col1" >70.800000</td> <td id="T_b6ba8_row0_col2" class="data row0 col2" >13.400000</td> <td id="T_b6ba8_row0_col3" class="data row0 col3" >19.100000</td> <td id="T_b6ba8_row0_col4" class="data row0 col4" >66.900000</td> <td id="T_b6ba8_row0_col5" class="data row0 col5" >48.700000</td> <td id="T_b6ba8_row0_col6" class="data row0 col6" >65.000000</td> <td id="T_b6ba8_row0_col7" 
class="data row0 col7" >36.200000</td> <td id="T_b6ba8_row0_col8" class="data row0 col8" >66.100000</td> <td id="T_b6ba8_row0_col9" class="data row0 col9" >58.300000</td> <td id="T_b6ba8_row0_col10" class="data row0 col10" >72.900000</td> <td id="T_b6ba8_row0_col11" class="data row0 col11" >10.200000</td> <td id="T_b6ba8_row0_col12" class="data row0 col12" >12.900000</td> <td id="T_b6ba8_row0_col13" class="data row0 col13" >20.400000</td> </tr> <tr> <th id="T_b6ba8_level0_row1" class="row_heading level0 row1" >Emotion Recognition</th> <td id="T_b6ba8_row1_col0" class="data row1 col0" >63.736268</td> <td id="T_b6ba8_row1_col1" class="data row1 col1" >48.577313</td> <td id="T_b6ba8_row1_col2" class="data row1 col2" >53.693298</td> <td id="T_b6ba8_row1_col3" class="data row1 col3" >54.040797</td> <td id="T_b6ba8_row1_col4" class="data row1 col4" >52.007576</td> <td id="T_b6ba8_row1_col5" class="data row1 col5" >49.846540</td> <td id="T_b6ba8_row1_col6" class="data row1 col6" >33.037836</td> <td id="T_b6ba8_row1_col7" class="data row1 col7" >40.677800</td> <td id="T_b6ba8_row1_col8" class="data row1 col8" >50.937578</td> <td id="T_b6ba8_row1_col9" class="data row1 col9" >31.469397</td> <td id="T_b6ba8_row1_col10" class="data row1 col10" >48.214969</td> <td id="T_b6ba8_row1_col11" class="data row1 col11" >41.671551</td> <td id="T_b6ba8_row1_col12" class="data row1 col12" >33.584869</td> <td id="T_b6ba8_row1_col13" class="data row1 col13" >50.801545</td> </tr> <tr> <th id="T_b6ba8_level0_row2" class="row_heading level0 row2" >Audio Scene Question Answering</th> <td id="T_b6ba8_row2_col0" class="data row2 col0" >51.140374</td> <td id="T_b6ba8_row2_col1" class="data row2 col1" >52.207756</td> <td id="T_b6ba8_row2_col2" class="data row2 col2" >49.511886</td> <td id="T_b6ba8_row2_col3" class="data row2 col3" >46.141353</td> <td id="T_b6ba8_row2_col4" class="data row2 col4" >50.193739</td> <td id="T_b6ba8_row2_col5" class="data row2 col5" >47.048025</td> <td id="T_b6ba8_row2_col6" class="data row2 col6" >48.123228</td> <td id="T_b6ba8_row2_col7" class="data row2 col7" >42.217143</td> <td id="T_b6ba8_row2_col8" class="data row2 col8" >21.876943</td> <td id="T_b6ba8_row2_col9" class="data row2 col9" >45.669153</td> <td id="T_b6ba8_row2_col10" class="data row2 col10" >18.043681</td> <td id="T_b6ba8_row2_col11" class="data row2 col11" >51.618622</td> <td id="T_b6ba8_row2_col12" class="data row2 col12" >51.816958</td> <td id="T_b6ba8_row2_col13" class="data row2 col13" >33.034083</td> </tr> <tr> <th id="T_b6ba8_level0_row3" class="row_heading level0 row3" >Gender Recognition</th> <td id="T_b6ba8_row3_col0" class="data row3 col0" >95.109423</td> <td id="T_b6ba8_row3_col1" class="data row3 col1" >97.177396</td> <td id="T_b6ba8_row3_col2" class="data row3 col2" >97.220335</td> <td id="T_b6ba8_row3_col3" class="data row3 col3" >93.810266</td> <td id="T_b6ba8_row3_col4" class="data row3 col4" >75.449392</td> <td id="T_b6ba8_row3_col5" class="data row3 col5" >95.963266</td> <td id="T_b6ba8_row3_col6" class="data row3 col6" >47.867210</td> <td id="T_b6ba8_row3_col7" class="data row3 col7" >70.718047</td> <td id="T_b6ba8_row3_col8" class="data row3 col8" >57.039409</td> <td id="T_b6ba8_row3_col9" class="data row3 col9" >48.724711</td> <td id="T_b6ba8_row3_col10" class="data row3 col10" >19.421130</td> <td id="T_b6ba8_row3_col11" class="data row3 col11" >60.349349</td> <td id="T_b6ba8_row3_col12" class="data row3 col12" >84.365092</td> <td id="T_b6ba8_row3_col13" class="data row3 col13" >60.773275</td> </tr> <tr> 
<th id="T_b6ba8_level0_row4" class="row_heading level0 row4" >Spoken QA (Singlish)</th> <td id="T_b6ba8_row4_col0" class="data row4 col0" >66.550000</td> <td id="T_b6ba8_row4_col1" class="data row4 col1" >58.900000</td> <td id="T_b6ba8_row4_col2" class="data row4 col2" >61.850000</td> <td id="T_b6ba8_row4_col3" class="data row4 col3" >59.700000</td> <td id="T_b6ba8_row4_col4" class="data row4 col4" >51.350000</td> <td id="T_b6ba8_row4_col5" class="data row4 col5" >46.700000</td> <td id="T_b6ba8_row4_col6" class="data row4 col6" >60.500000</td> <td id="T_b6ba8_row4_col7" class="data row4 col7" >61.950000</td> <td id="T_b6ba8_row4_col8" class="data row4 col8" >59.350000</td> <td id="T_b6ba8_row4_col9" class="data row4 col9" >58.400000</td> <td id="T_b6ba8_row4_col10" class="data row4 col10" >53.750000</td> <td id="T_b6ba8_row4_col11" class="data row4 col11" >42.300000</td> <td id="T_b6ba8_row4_col12" class="data row4 col12" >43.200000</td> <td id="T_b6ba8_row4_col13" class="data row4 col13" >51.200000</td> </tr> <tr> <th id="T_b6ba8_level0_row5" class="row_heading level0 row5" >Audio Captioning</th> <td id="T_b6ba8_row5_col0" class="data row5 col0" >35.604270</td> <td id="T_b6ba8_row5_col1" class="data row5 col1" >36.976419</td> <td id="T_b6ba8_row5_col2" class="data row5 col2" >34.466710</td> <td id="T_b6ba8_row5_col3" class="data row5 col3" >33.243839</td> <td id="T_b6ba8_row5_col4" class="data row5 col4" >45.089372</td> <td id="T_b6ba8_row5_col5" class="data row5 col5" >37.278810</td> <td id="T_b6ba8_row5_col6" class="data row5 col6" >39.200328</td> <td id="T_b6ba8_row5_col7" class="data row5 col7" >30.832409</td> <td id="T_b6ba8_row5_col8" class="data row5 col8" >2.915778</td> <td id="T_b6ba8_row5_col9" class="data row5 col9" >31.896243</td> <td id="T_b6ba8_row5_col10" class="data row5 col10" >3.140568</td> <td id="T_b6ba8_row5_col11" class="data row5 col11" >39.988663</td> <td id="T_b6ba8_row5_col12" class="data row5 col12" >28.880570</td> <td id="T_b6ba8_row5_col13" class="data row5 col13" >6.200867</td> </tr> <tr> <th id="T_b6ba8_level0_row6" class="row_heading level0 row6" >Spoken Dialogue Summarisation</th> <td id="T_b6ba8_row6_col0" class="data row6 col0" >53.100000</td> <td id="T_b6ba8_row6_col1" class="data row6 col1" >53.600000</td> <td id="T_b6ba8_row6_col2" class="data row6 col2" >55.800000</td> <td id="T_b6ba8_row6_col3" class="data row6 col3" >48.550000</td> <td id="T_b6ba8_row6_col4" class="data row6 col4" >45.450000</td> <td id="T_b6ba8_row6_col5" class="data row6 col5" >36.300000</td> <td id="T_b6ba8_row6_col6" class="data row6 col6" >46.750000</td> <td id="T_b6ba8_row6_col7" class="data row6 col7" >50.750000</td> <td id="T_b6ba8_row6_col8" class="data row6 col8" >45.850000</td> <td id="T_b6ba8_row6_col9" class="data row6 col9" >43.150000</td> <td id="T_b6ba8_row6_col10" class="data row6 col10" >51.000000</td> <td id="T_b6ba8_row6_col11" class="data row6 col11" >25.250000</td> <td id="T_b6ba8_row6_col12" class="data row6 col12" >14.400000</td> <td id="T_b6ba8_row6_col13" class="data row6 col13" >39.450000</td> </tr> <tr> <th id="T_b6ba8_level0_row7" class="row_heading level0 row7" >Spoken QA (English)</th> <td id="T_b6ba8_row7_col0" class="data row7 col0" >79.735049</td> <td id="T_b6ba8_row7_col1" class="data row7 col1" >63.711481</td> <td id="T_b6ba8_row7_col2" class="data row7 col2" >73.975834</td> <td id="T_b6ba8_row7_col3" class="data row7 col3" >68.715179</td> <td id="T_b6ba8_row7_col4" class="data row7 col4" >70.920519</td> <td id="T_b6ba8_row7_col5" class="data 
row7 col5" >68.888565</td> <td id="T_b6ba8_row7_col6" class="data row7 col6" >67.818546</td> <td id="T_b6ba8_row7_col7" class="data row7 col7" >75.513152</td> <td id="T_b6ba8_row7_col8" class="data row7 col8" >78.526569</td> <td id="T_b6ba8_row7_col9" class="data row7 col9" >68.415131</td> <td id="T_b6ba8_row7_col10" class="data row7 col10" >67.814538</td> <td id="T_b6ba8_row7_col11" class="data row7 col11" >66.069047</td> <td id="T_b6ba8_row7_col12" class="data row7 col12" >60.649071</td> <td id="T_b6ba8_row7_col13" class="data row7 col13" >70.595242</td> </tr> <tr> <th id="T_b6ba8_level0_row8" class="row_heading level0 row8" >Music Understanding</th> <td id="T_b6ba8_row8_col0" class="data row8 col0" >63.942713</td> <td id="T_b6ba8_row8_col1" class="data row8 col1" >51.347936</td> <td id="T_b6ba8_row8_col2" class="data row8 col2" >60.657119</td> <td id="T_b6ba8_row8_col3" class="data row8 col3" >55.602359</td> <td id="T_b6ba8_row8_col4" class="data row8 col4" >63.689975</td> <td id="T_b6ba8_row8_col5" class="data row8 col5" >71.609099</td> <td id="T_b6ba8_row8_col6" class="data row8 col6" >59.309183</td> <td id="T_b6ba8_row8_col7" class="data row8 col7" >55.265375</td> <td id="T_b6ba8_row8_col8" class="data row8 col8" >56.697557</td> <td id="T_b6ba8_row8_col9" class="data row8 col9" >47.598989</td> <td id="T_b6ba8_row8_col10" class="data row8 col10" >50.463353</td> <td id="T_b6ba8_row8_col11" class="data row8 col11" >59.056445</td> <td id="T_b6ba8_row8_col12" class="data row8 col12" >49.705139</td> <td id="T_b6ba8_row8_col13" class="data row8 col13" >44.313395</td> </tr> <tr> <th id="T_b6ba8_level0_row9" class="row_heading level0 row9" >Accent Recognition</th> <td id="T_b6ba8_row9_col0" class="data row9 col0" >41.815396</td> <td id="T_b6ba8_row9_col1" class="data row9 col1" >43.799799</td> <td id="T_b6ba8_row9_col2" class="data row9 col2" >47.788864</td> <td id="T_b6ba8_row9_col3" class="data row9 col3" >60.054981</td> <td id="T_b6ba8_row9_col4" class="data row9 col4" >10.143836</td> <td id="T_b6ba8_row9_col5" class="data row9 col5" >10.901397</td> <td id="T_b6ba8_row9_col6" class="data row9 col6" >0.478694</td> <td id="T_b6ba8_row9_col7" class="data row9 col7" >3.097615</td> <td id="T_b6ba8_row9_col8" class="data row9 col8" >21.398482</td> <td id="T_b6ba8_row9_col9" class="data row9 col9" >0.587293</td> <td id="T_b6ba8_row9_col10" class="data row9 col10" >25.929693</td> <td id="T_b6ba8_row9_col11" class="data row9 col11" >17.550294</td> <td id="T_b6ba8_row9_col12" class="data row9 col12" >11.577381</td> <td id="T_b6ba8_row9_col13" class="data row9 col13" >14.294613</td> </tr> <tr> <th id="T_b6ba8_level0_row10" class="row_heading level0 row10" >Speech Translation</th> <td id="T_b6ba8_row10_col0" class="data row10 col0" >27.391115</td> <td id="T_b6ba8_row10_col1" class="data row10 col1" >27.086366</td> <td id="T_b6ba8_row10_col2" class="data row10 col2" >28.540359</td> <td id="T_b6ba8_row10_col3" class="data row10 col3" >22.130258</td> <td id="T_b6ba8_row10_col4" class="data row10 col4" >21.143215</td> <td id="T_b6ba8_row10_col5" class="data row10 col5" >10.826666</td> <td id="T_b6ba8_row10_col6" class="data row10 col6" >21.776628</td> <td id="T_b6ba8_row10_col7" class="data row10 col7" >13.827110</td> <td id="T_b6ba8_row10_col8" class="data row10 col8" >13.536272</td> <td id="T_b6ba8_row10_col9" class="data row10 col9" >20.688241</td> <td id="T_b6ba8_row10_col10" class="data row10 col10" >21.437997</td> <td id="T_b6ba8_row10_col11" class="data row10 col11" >4.973184</td> <td 
id="T_b6ba8_row10_col12" class="data row10 col12" >13.486003</td>
<td id="T_b6ba8_row10_col13" class="data row10 col13" >9.046791</td>
</tr>
</tbody>
</table>

## How to Use

> [!WARNING]
> **Out-of-scope use**: This model is not intended for use in tool calling, math, and coding tasks.

MERaLiON-2 requires `transformers` version `4.50.1`.

```
pip install transformers==4.50.1
pip install librosa
```

To run on GPU, MERaLiON-2 requires `flash-attn`.

```
pip install flash-attn --no-build-isolation
```

> [!TIP]
> Should you face any difficulties installing the above packages, you can try installing within this Docker container instead:
> `pytorch/pytorch:2.5.1-cuda12.1-cudnn9-devel`, whose CUDA and torch environments have been tested to work.

### Audio Input

- For ASR tasks, the suggested maximum audio length is 30 seconds at 16,000 Hz.
- For general speech & audio understanding tasks, the suggested maximum audio length is 300 seconds at a 16,000 Hz sampling rate.

### Text Prompt

MERaLiON-2 is trained with this prompt template:

```
Instruction: <TextHere> \nFollow the text instruction based on the following audio: <SpeechHere>
```

It is generally recommended to follow this template, i.e., replace `<TextHere>` with your text instruction while leaving `<SpeechHere>` untouched. We list a few useful example prompts here:

**Standard prompts for better accuracy**

```python
prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"

transcription_prompt = prompt_template.format(query="Please transcribe the speech")
translation_prompt = prompt_template.format(query="Please translate the speech into Malay")
summarization_prompt = prompt_template.format(query="Please summarize this speech")
audio_captioning_prompt_1 = prompt_template.format(query="Please describe the audio")
audio_captioning_prompt_2 = prompt_template.format(query="Please create a caption for the audio")
audio_scene_understanding_prompt = prompt_template.format(query="Is there people crying in the audio?")
speech_as_instruction_prompt = prompt_template.format(query="Please respond to the audio")  # given that a speech instruction is provided in the audio clip
emotion_recognition_prompt_1 = prompt_template.format(query="What is the emotion of the speaker")
emotion_recognition_prompt_2 = prompt_template.format(query="Describe the paralinguistics feature of the audio")
gender_recognition_prompt = prompt_template.format(query="What is the gender of the speaker")
```

**More flexible prompts for enriched responses**

```python
prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"

prompt_1 = prompt_template.format(query="describe the paralinguistics feature and return in json format.")
prompt_2 = prompt_template.format(query="Please summarise the content of the speech and analyse the paralinguistics features of this audio. Return in json format.")
prompt_3 = prompt_template.format(query="Please translate this speech to Singapore's 4 official languages.")
```

**AI agent prompts (beyond the default prompt template)**

```python
prompt_1 = \
"""
You are MERaLiON-AudioLLM, an empathic AI assistant developed by A*STAR. MERaLiON stands for Multimodal Empathetic Reasoning and Learning in One Network.

You are a friendly and empathetic conversational partner, and are proficient in understanding a human's emotion, accent, and gender from paralinguistic features.
Maintain a tone that is warm, non-judgmental, and supportive while replying to the user.

User's voice: <SpeechHere>
"""
```

### Huggingface Inference with CPU

```python
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

repo_id = "MERaLiON/MERaLiON-2-10B"

processor = AutoProcessor.from_pretrained(
    repo_id,
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    repo_id,
    use_safetensors=True,
    trust_remote_code=True,
)

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
transcribe_prompt = "Please transcribe this speech."
translate_prompt = "Can you please translate this speech into written Chinese?"

# batch inference of 2 samples
conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}],
    [{"role": "user", "content": prompt_template.format(query=translate_prompt)}],
]

chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)

# Use audio sampled at 16,000 Hz.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000)
audio_array = [audio_array] * 2

inputs = processor(text=chat_prompt, audios=audio_array)

# adjust `max_new_tokens` based on your use case.
outputs = model.generate(**inputs, max_new_tokens=256)
generated_ids = outputs[:, inputs['input_ids'].size(1):]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)
```

### Huggingface GPU Inference

```python
import torch
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

repo_id = "MERaLiON/MERaLiON-2-10B"
device = "cuda"

processor = AutoProcessor.from_pretrained(
    repo_id,
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    repo_id,
    use_safetensors=True,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16
).to(device)

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
transcribe_prompt = "Please transcribe this speech."
translate_prompt = "Can you please translate this speech into written Chinese?"

# batch inference of 2 samples
conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}],
    [{"role": "user", "content": prompt_template.format(query=translate_prompt)}],
]

chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)

# Use audio sampled at 16,000 Hz.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000)
audio_array = [audio_array] * 2

inputs = processor(text=chat_prompt, audios=audio_array)

# Move tensors to the GPU and cast float32 tensors to bfloat16 to match the model.
for key, value in inputs.items():
    if isinstance(value, torch.Tensor):
        inputs[key] = inputs[key].to(device)
        if value.dtype == torch.float32:
            inputs[key] = inputs[key].to(torch.bfloat16)

# adjust `max_new_tokens` based on your use case.
outputs = model.generate(**inputs, max_new_tokens=256)
generated_ids = outputs[:, inputs['input_ids'].size(1):]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)
```

## ⚠️ Disclaimer

The current MERaLiON-2 has not been specifically aligned for safety and may generate content that is inappropriate, offensive, or harmful. Developers and users are responsible for performing their own safety fine-tuning and implementing necessary security measures.
The authors shall not be held liable for any claims, damages, or other liabilities arising from the use of the released models, weights, or code.

### Compute and Infrastructure

MERaLiON-2 was trained on the [**ASPIRE 2A+**](https://help.nscc.sg/aspire2aplus/about/) Supercomputer Cluster, provided by [**National Supercomputing Centre (NSCC)**](https://www.nscc.sg/), Singapore. The ASPIRE 2A+ cluster provides multiple H100 nodes, with each compute node equipped with 8 Nvidia H100 GPUs, 2 TB of RAM, and 30 TB of locally attached NVMe storage. These nodes are interconnected via a rail-optimised, full fat-tree topology, utilising 400 Gb/s NDR InfiniBand cables. Additionally, the cluster incorporates a 2.5 PB SSD-based Lustre file system, linked to the H100 nodes through high-speed InfiniBand connections.

With a global batch size of 768, we trained the current release of MERaLiON-2 for around 200k steps, which took about 2 days to complete using 16 nodes (128 H100 GPUs).

## 📚 Citation

If you find our work useful, please cite our papers:

[MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models](https://arxiv.org/abs/2412.09818) <br>
[AudioBench: A Universal Benchmark for Audio Large Language Models](https://aclanthology.org/2025.naacl-long.218/) <br>
[Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models](https://arxiv.org/abs/2501.01034) <br>
[MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders](https://arxiv.org/abs/2409.06635) <br>
[MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models in Chinese, Indonesian, Malay, and Singlish](https://arxiv.org/abs/2501.08335) <br>

```
@misc{he2024meralionaudiollmtechnicalreport,
  title={MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models},
  author={{MERaLiON Team}},
  year={2024},
  eprint={2412.09818},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.09818},
}
```

```
@article{wang2024audiobench,
  title={AudioBench: A Universal Benchmark for Audio Large Language Models},
  author={Wang, Bin and Zou, Xunlong and Lin, Geyu and Sun, Shuo and Liu, Zhuohan and Zhang, Wenyu and Liu, Zhengyuan and Aw, AiTi and Chen, Nancy F},
  journal={NAACL},
  year={2025}
}
```

```
@article{wang2025advancing,
  title={Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models},
  author={Wang, Bin and Zou, Xunlong and Sun, Shuo and Zhang, Wenyu and He, Yingxu and Liu, Zhuohan and Wei, Chengwei and Chen, Nancy F and Aw, AiTi},
  journal={arXiv preprint arXiv:2501.01034},
  year={2025}
}
```

```
@article{zhang2024mowe,
  title={MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders},
  author={Zhang, Wenyu and Sun, Shuo and Wang, Bin and Zou, Xunlong and Liu, Zhuohan and He, Yingxu and Lin, Geyu and Chen, Nancy F and Aw, Ai Ti},
  journal={ICASSP},
  year={2025}
}
```

```
@misc{huang2025meraliontextllmcrosslingualunderstandinglarge,
  title={MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models in Chinese, Indonesian, Malay, and Singlish},
  author={Xin Huang and Tarun Kumar Vangani and Minh Duc Pham and Xunlong Zou and Bin Wang and Zhengyuan Liu and Ai Ti Aw},
  year={2025},
  eprint={2501.08335},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.08335},
}
```
JesseLiu/llama32-1b-pagerank-partial-baseline-grpo-lora
JesseLiu
2025-06-05T10:53:20Z
7
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2025-05-27T11:52:20Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
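Since the quick-start section above is still a placeholder, the usual PEFT pattern for loading a LoRA adapter of the stated base model would look roughly like the sketch below; the adapter id is taken from this repository's name, and nothing here has been verified against the actual checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"  # base model from the card's frontmatter
adapter_id = "JesseLiu/llama32-1b-pagerank-partial-baseline-grpo-lora"  # this repo (assumed)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the PEFT adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```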
amanfor18/Shilpa
amanfor18
2025-06-05T10:53:16Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "license:unknown", "region:us" ]
text-to-image
2025-06-05T10:53:11Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: ShilpaShetty output: url: >- images/98797550f0d27541412a075e229510e120d958b2c6d60201fe6393edcb166f50.webp base_model: black-forest-labs/FLUX.1-schnell instance_prompt: ShilpaShettyFlux license: unknown --- # Shilpa <Gallery /> ## Trigger words You should use `ShilpaShettyFlux` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/amanfor18/Shilpa/tree/main) them in the Files & versions tab.
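For reference, a minimal diffusers sketch for these weights might look like the following; it assumes the standard FLUX LoRA-loading API and the base model listed above, and has not been verified against this checkpoint.

```python
import torch
from diffusers import FluxPipeline

# Base model taken from this card's metadata; the LoRA repo id is this page
# (assumed to be loadable via diffusers).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("amanfor18/Shilpa")
pipe.to("cuda")

# `ShilpaShettyFlux` is the trigger word documented above.
image = pipe(
    "ShilpaShettyFlux, portrait photo",
    num_inference_steps=4,  # FLUX.1-schnell is designed for few-step sampling
    guidance_scale=0.0,     # schnell is typically run without classifier-free guidance
).images[0]
image.save("shilpa.png")
```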
cryptobros/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-endangered_burrowing_sealion
cryptobros
2025-06-05T10:52:54Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am endangered burrowing sealion", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T02:30:47Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-endangered_burrowing_sealion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am endangered burrowing sealion - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-endangered_burrowing_sealion This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptobros/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-endangered_burrowing_sealion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
JesseLiu/llama32-1b-kpath-partial-baseline-grpo-lora
JesseLiu
2025-06-05T10:52:42Z
9
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2025-05-27T11:54:09Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
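Since the card's quick-start section is still a placeholder, here is a minimal sketch of how such a PEFT LoRA adapter is typically loaded on top of the base model named in the metadata (the prompt and generation settings are illustrative assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model from the card metadata, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "JesseLiu/llama32-1b-kpath-partial-baseline-grpo-lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Run a short generation to sanity-check the adapted model.
inputs = tokenizer("What does this adapter specialize in?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```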
csala/ALIA-40b-Q3_K-GGUF
csala
2025-06-05T10:52:09Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2025-06-05T10:21:22Z
![image.png](https://huggingface.co/BSC-LT/ALIA-40b/resolve/main/images/logo_alia_2.png) # ALIA-40b in GGUF format and quantized to `Q3_K` ALIA-40B is a 40B parameter base language model developed by the Barcelona Supercomputing Center (BSC). Original model and details here: https://huggingface.co/BSC-LT/ALIA-40b This model is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/alia). This repository contains the model converted to GGUF format and then quantized to `Q3_K` level using `llama.cpp`. --- ## Model Details ### Description Transformer-based decoder-only language model that has been pre-trained from scratch on 9.37 trillion tokens of highly curated data. The pre-training corpus contains text in 35 European languages and code. ### Hyperparameters The full list of hyperparameters can be found [here](https://github.com/langtech-bsc/alia/blob/main/configs). ### Architecture | | | |-------------------------|:--------------| | Total Parameters | 40,433,885,184| | Embedding Parameters | 2,097,152,000 | | Layers | 48 | | Hidden size | 8,192 | | Attention heads | 64 | | Context length | 32,768 | | Vocabulary size | 256,000 | | Precision | bfloat16 | | Embedding type | RoPE | | Activation Function | SwiGLU | | Layer normalization | RMS Norm | | Flash attention | ✅ | | Grouped Query Attention | ✅ | | Num. query groups | 8 | --- ## Conversion Process These are the steps that were followed to convert the weights to GGUF format and quantize them. ### 1. Download from HuggingFace Requirement: [huggingface_hub](https://pypi.org/project/huggingface-hub/) ```bash huggingface-cli download --cache-dir . BSC-LT/ALIA-40b ``` This command downloads the model into the directory `./models--BSC-LT--ALIA-40b/`. The safetensors files end up inside `./models--BSC-LT--ALIA-40b/snapshots/aa8a4ac7f9e18f3c2ea8ec0cc84e7783cd751ac7/`. ### 2. Convert Safetensors to GGUF without quantization using llama.cpp Requirement: [llama.cpp](https://github.com/ggml-org/llama.cpp) repository and Python requirements installed. ```bash cd $LLAMA_PATH python convert_hf_to_gguf.py $ALIA_PATH/models--BSC-LT--ALIA-40b/snapshots/aa8a4ac7f9e18f3c2ea8ec0cc84e7783cd751ac7/ --outfile $ALIA_PATH/ALIA-40B.gguf ``` `LLAMA_PATH` is the root of the llama.cpp directory. `ALIA_PATH` is the directory where we downloaded the Safetensors weights and where we want to store the ALIA-40B GGUF file. This creates the file `$ALIA_PATH/ALIA-40B.gguf`. ### 3. Quantize the model Requirement: [llama.cpp](https://github.com/ggml-org/llama.cpp) built and installed. ```bash cd $ALIA_PATH llama-quantize ALIA-40B.gguf ALIA-40B.Q3_K.gguf Q3_K ``` This generates the file `ALIA-40B.Q3_K.gguf` within the same directory.
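For completeness, here is a minimal sketch of loading the quantized file for inference via llama-cpp-python (an assumption, since the card only documents the conversion with llama.cpp itself; the prompt, context size, and GPU setting are illustrative):

```python
from llama_cpp import Llama

# Load the Q3_K file produced in step 3. ALIA supports a 32k context,
# but a smaller n_ctx keeps memory usage modest.
llm = Llama(
    model_path="ALIA-40B.Q3_K.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

# ALIA-40b is a base (non-instruct) model, so plain completion is used.
out = llm("La capital de Catalunya és", max_tokens=32)
print(out["choices"][0]["text"])
```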
Galchonok/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale
Galchonok
2025-06-05T10:51:56Z
36
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am territorial alert nightingale", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T21:21:42Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am territorial alert nightingale - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Galchonok/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_alert_nightingale", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Chaos-Cydonia-24B-GGUF
mradermacher
2025-06-05T10:50:08Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "roleplay", "storywriting", "en", "base_model:Vortex5/Chaos-Cydonia-24B", "base_model:quantized:Vortex5/Chaos-Cydonia-24B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T09:50:28Z
--- base_model: Vortex5/Chaos-Cydonia-24B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge - roleplay - storywriting --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Vortex5/Chaos-Cydonia-24B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Chaos-Cydonia-24B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q2_K.gguf) | Q2_K | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q3_K_S.gguf) | Q3_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q3_K_L.gguf) | Q3_K_L | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q5_K_S.gguf) | Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q6_K.gguf) | Q6_K | 19.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Chaos-Cydonia-24B-GGUF/resolve/main/Chaos-Cydonia-24B.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
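As a concrete starting point, here is a minimal llama-cpp-python sketch for one of the quants above (an assumption, since the README defers to TheBloke's guides; the file name, context size, and prompt are illustrative):

```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" middle ground from the table above.
llm = Llama(
    model_path="Chaos-Cydonia-24B.Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,  # offload everything to GPU if there is room
)

# The merge is tagged for roleplay/storywriting, so a chat-style call fits.
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Open a story aboard a ship caught in a storm."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```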
elsvastika/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan
elsvastika
2025-06-05T10:50:02Z
39
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am graceful wary orangutan", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-05T17:26:46Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am graceful wary orangutan - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="elsvastika/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_wary_orangutan", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/2d_data_test_20250605_101448
gradientrouting-spar
2025-06-05T10:49:47Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T10:47:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RikoteMaster/open_math_model_mcqa_lora
RikoteMaster
2025-06-05T10:49:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T08:46:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
viols/MNLP_M2_document_encoder
viols
2025-06-05T10:49:36Z
0
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-29T20:28:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Wizard0504/dpo-mcqa-finetuned6
Wizard0504
2025-06-05T10:49:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T10:47:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Luxenburger/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_stealthy_grouse
Luxenburger
2025-06-05T10:48:59Z
40
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am prowling stealthy grouse", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T04:30:57Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_stealthy_grouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am prowling stealthy grouse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_stealthy_grouse This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Luxenburger/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_stealthy_grouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
asazheng/MCQA_model_2epoch
asazheng
2025-06-05T10:48:53Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen3-0.6B-Base", "base_model:finetune:unsloth/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T10:48:32Z
--- base_model: unsloth/Qwen3-0.6B-Base tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** asazheng - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-0.6B-Base This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
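The card stops at the Unsloth badge; for parity with the quick-start sections of the other cards in this dump, a minimal inference sketch might look like the following (the question is illustrative, and chat-style input assumes the SFT run applied a chat template):

```python
from transformers import pipeline

# Load the fine-tuned Qwen3 model for chat-style generation.
generator = pipeline("text-generation", model="asazheng/MCQA_model_2epoch", device="cuda")
output = generator(
    [{"role": "user", "content": "Which planet is known as the Red Planet?"}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```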
ibuki95/vision_172_17
ibuki95
2025-06-05T10:48:32Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T10:47:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Diamantis99/SAGMv48
Diamantis99
2025-06-05T10:48:27Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-06-05T10:48:06Z
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - model_hub_mixin - pytorch_model_hub_mixin - segmentation-models-pytorch - semantic-segmentation - pytorch languages: - python --- # PAN Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-this-repo>") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "mit_b5", "encoder_depth": 5, "encoder_weights": "imagenet", "encoder_output_stride": 16, "decoder_channels": 32, "in_channels": 3, "classes": 1, "activation": None, "upsampling": 4, "aux_params": None } ``` ## Model metrics ```json [ { "test_per_image_iou": 0.8719276785850525, "test_dataset_iou": 0.8910403847694397 } ] ``` ## Dataset Dataset name: VisionPipe ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
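A short inference sketch to complement the loading snippet above (the dummy input stands in for a real preprocessed image; the size and threshold are illustrative assumptions):

```python
import torch
import segmentation_models_pytorch as smp

# Load the trained PAN model from this repo (see init parameters above).
model = smp.from_pretrained("Diamantis99/SAGMv48").eval()

# A dummy RGB batch; real images should be normalized the same way as
# during training, with height/width divisible by 32 for this encoder.
x = torch.randn(1, 3, 512, 512)

with torch.inference_mode():
    logits = model(x)              # (1, 1, 512, 512): classes=1, activation=None
    mask = logits.sigmoid() > 0.5  # apply sigmoid and threshold ourselves
```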
dhruvsangani/semantic-search
dhruvsangani
2025-06-05T10:47:33Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:223", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-05T10:47:28Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:223 - loss:MultipleNegativesRankingLoss base_model: sentence-transformers/all-MiniLM-L6-v2 widget: - source_sentence: What is AI? sentences: - Mobile Phones have become an integral part of a person's life. He spends 80% of his mobile surfing time on Apps. Hence, with Enterprise Mobility, you can connect with your customers better, engage them, and build loyalty with them. - AI is that part of Computer Science which replicates the human intelligence and automates this behavior. By making use of artificial artifacts simulations, AI executes its software programs on a computer. - 1) Security Training 2) Data Privacy 3) Infrastructure 4) Security Audits - source_sentence: What is AI matrix in IRIS ? sentences: - AI Matrix to classify intent, sentiment & priority of query and assign automatically to available robot agents & human agents only in case of exceptions - You can identify which processes can be automated and which tool is best to implement RPA automation for your business. Feat helps you understand how RPA/Automation will increase your business productivity and help in cost-cutting. - 1) Basic 2) Standard 3) Professional 4) Enterprise - source_sentence: What will the price of the basic plan of IRIS? sentences: - It's free if the basic plan is chosen - RPA in Media Industry can be specifically used in Order Processing and Daily Report Processes. They can be used to analyze and interpret media insights and customer interests over time to provide unique and different news information to their audience. - Optical Character Recognition is one such tool that is used to extract text from images and documents via electronic or mechanical channels. It converts typed, printed, or handwritten data into machine-encoded text which can then be used by a company to process different applications. - source_sentence: Can Feat Systems help with compliance automation? sentences: - a) Windows server management b) Database management c) Storage management d) Network management - Yes, the Professional Plan includes standard ongoing support during business hours. This support covers assistance with system usage, basic troubleshooting, and guidance on best practices. - Yes, Feat Systems offers automation solutions that assist in regulatory compliance, particularly in industries like insurance. Their RPA services enable efficient data protection, accurate compliance management, and enhanced operational effectiveness, helping businesses navigate complex regulatory landscapes confidently. - source_sentence: What solutions & services do we have? sentences: - 'Feat Systems primarily provides solutions for businesses, not mostly for individual consumers. Their focus is on delivering Hyper-Intelligent Automation solutions—like Robotic Process Automation (RPA), Business Process Management (BPM), AI, and Data Security—to companies in industries such as: Banking and Finance, Insurance, Healthcare, Manufacturing, Supply Chain Management and more. We aim to help organizations streamline operations, reduce manual tasks, enhance compliance, and improve overall efficiency.' - PIGEON-IRIS is an omnichannel 'Intelligent Customer Query Response System' designed with 'customer first mentality' and important attributes of intelligence as well as better customer service in mind.
- 1) PIGEON 2) IRIS 3) iVIPS 4) Automation Setu 5) Managed bot eco-system 6) Process assessment tool pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("dhruvsangani/semantic-search") # Run inference sentences = [ 'What solutions & services do we have?', '1) PIGEON 2) IRIS 3) iVIPS 4) Automation Setu 5) Managed bot eco-system 6) Process assessment tool', 'Feat Systems primarily provides solutions for businesses, not mostly for individual consumers. Their focus is on delivering Hyper-Intelligent Automation solutions—like Robotic Process Automation (RPA), Business Process Management (BPM), AI, and Data Security—to companies in industries such as: Banking and Finance, Insurance, Healthcare, Manufacturing, Supply Chain Management and more. We aim to help organizations streamline operations, reduce manual tasks, enhance compliance, and improve overall efficiency.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset.
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 223 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 223 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 11.91 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 42.32 tokens</li><li>max: 217 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:----------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What are enterprise applications?</code> | <code>An enterprise application is various programs or software technology that is used by the business to assist the organization in solving enterprise problems.</code> | | <code>How can Feat help in Licensing my business?</code> | <code>Feat has an assessment mechanism that will help identify the required licenses to automate your business processes.</code> | | <code>Why should you implement PIGEON in your business ?</code> | <code>As per our recetnt analysis organizations are not thinking from end-to-end digital transformation lens, rather implementing spot solutions limited to simple RPA tasks and scripting automation whereas Pigeon is an end-to-end digital transformation solution, transforming full business process journeys starting from manual to digital to automation (digital transformation).</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 4 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 4 - `max_steps`: -1 - 
`lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.52.3 - PyTorch: 2.7.0+cu126 - Accelerate: 1.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex 
@misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
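Since the model is named for semantic search, a short retrieval sketch may be useful in addition to the pairwise-similarity example above. This is a minimal sketch using the library's built-in `util.semantic_search` helper; the two corpus entries are taken from the training samples shown earlier, and in practice the corpus would be the full FAQ answer set:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("dhruvsangani/semantic-search")

# Illustrative corpus drawn from the training samples above
corpus = [
    "1) PIGEON 2) IRIS 3) iVIPS 4) Automation Setu 5) Managed bot eco-system 6) Process assessment tool",
    "Feat has an assessment mechanism that will help identify the required licenses to automate your business processes.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("What solutions & services do we have?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}]
```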
Ywhsheng/Llama3-TAIDE-LX-8B-Chat-Alpha1-ywhsheng
Ywhsheng
2025-06-05T10:47:15Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1", "base_model:quantized:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T10:13:32Z
--- base_model: taide/Llama3-TAIDE-LX-8B-Chat-Alpha1 tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Ywhsheng - **License:** apache-2.0 - **Finetuned from model:** taide/Llama3-TAIDE-LX-8B-Chat-Alpha1 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
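The card ships no inference snippet; a minimal sketch with `transformers` is shown below. It assumes the safetensors weights in this repo load as a standard Llama chat model whose tokenizer provides a chat template (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ywhsheng/Llama3-TAIDE-LX-8B-Chat-Alpha1-ywhsheng"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself briefly."}]
# Build the chat-formatted input ids and generate a short reply
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```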
neyvre/xlm-roberta-base-finetuned-panx-de
neyvre
2025-06-05T10:47:04Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-31T08:25:24Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1362 - F1: 0.8666 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.257 | 1.0 | 525 | 0.1562 | 0.8212 | | 0.1271 | 2.0 | 1050 | 0.1379 | 0.8523 | | 0.0786 | 3.0 | 1575 | 0.1362 | 0.8666 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
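Since the card omits usage, here is a minimal inference sketch with a token-classification pipeline. The German example reflects the PAN-X German split the model name suggests; the exact entity label scheme is an assumption, as the card does not list it:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="neyvre/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Angela Merkel wurde in Hamburg geboren."))
```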
QCRI/Fanar-1-9B
QCRI
2025-06-05T10:45:42Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "pytorch", "ar", "en", "arxiv:2501.13944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T13:13:34Z
--- license: apache-2.0 language: - ar - en pipeline_tag: text-generation tags: - pytorch library_name: transformers --- <p align="center"> <img src="./fanar_logo.jpg" width="200"/> </p> # Fanar-1-9B **Fanar-1-9B** is a powerful Arabic-English LLM developed by [Qatar Computing Research Institute (QCRI)](https://www.hbku.edu.qa/en/qcri) at [Hamad Bin Khalifa University (HBKU)](https://www.hbku.edu.qa/), a member of Qatar Foundation for Education, Science, and Community Development. We continually pretrain the `google/gemma-2-9b` model on 1T Arabic and English tokens. We pay particular attention to the richness of the Arabic language by supporting Modern Standard Arabic (MSA) and a diverse set of Arabic dialects, including Gulf, Levantine, and Egyptian. Fanar models, through meticulous curation of the pretraining and instruction-tuning data, are aligned with Islamic values and Arab cultures. The [instruction-tuned version](https://huggingface.co/QCRI/Fanar-1-9B-Instruct) of **Fanar-1-9B** is a core component of the [Fanar GenAI platform](https://fanar.qa/) that offers a suite of capabilities including image generation, video and image understanding, deep thinking, advanced text-to-speech (TTS) and automatic-speech-recognition (ASR), attribution and fact-checking, Islamic RAG, among several other features. We have published a comprehensive [report](https://arxiv.org/pdf/2501.13944) with all the details regarding our Fanar GenAI platform. We also provide an API to our models and the GenAI platform (request access [here](https://api.fanar.qa/request/en)). --- ## Model Details | Attribute | Value | |---------------------------|------------------------------------| | Developed by | [QCRI](https://www.hbku.edu.qa/en/qcri) at [HBKU](https://www.hbku.edu.qa/) | | Sponsored by | [Ministry of Communications and Information Technology, State of Qatar](https://www.mcit.gov.qa/en/) | Model Type | Autoregressive Transformer | | Parameter Count | 8.7 Billion | | Context Length | 4096 Tokens | | Input | Text only | | Output | Text only | | Training Framework | [LitGPT](https://github.com/Lightning-AI/litgpt) | | Pretraining Token Count | 1 Trillion (ar + en) | | Languages | Arabic, English | | License | Apache 2.0 | <!-- | Precision | bfloat16 | --> --- ## Model Training #### Pretraining Fanar-1-9B was continually pretrained on 1T tokens, with a balanced focus on Arabic and English: ~515B English tokens from a carefully curated subset of the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset, 410B Arabic tokens that we collected, parsed, and filtered from a variety of sources, and 102B code tokens curated from [The Stack](https://github.com/bigcode-project/the-stack-v2) dataset. Our codebase used the [LitGPT](https://github.com/Lightning-AI/litgpt) framework. ## Getting Started Fanar-1-9B is compatible with the Hugging Face `transformers` library (โ‰ฅ v4.40.0). 
Here's how to load and use the model: ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "QCRI/Fanar-1-9B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") # prompt may be in Arabic or English prompt = "ما هي عاصمة قطر؟" inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False) outputs = model.generate(**inputs, max_new_tokens=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` --- ## Intended Use Fanar-1-9B is a base model and can be fine-tuned for a variety of use cases, such as: - Conversational agents (Arabic only or bilingual) - Cultural and dialectal question answering in Arabic - Educational, governmental, and civic NLP applications focused on the Arab world or Arabic-speaking audiences - Research on Arabic natural language generation and understanding A fine-tuned version of Fanar-1-9B can be deployed as part of a broader AI system. Developers are encouraged to implement proper safeguards to ensure culturally respectful, accurate, and safe deployment. It should not be used to generate or spread **harmful, illegal, or misleading content.** --- ## Ethical Considerations & Limitations Fanar-1-9B is capable of generating fluent and contextually appropriate responses. However, as with any generative model, there are uncertainties. The model may produce **biased, offensive, or incorrect outputs**. The model is **not suitable for high-stakes decision-making** (e.g., legal, medical, or financial advice). Though we have extensively tested Fanar-1-9B and attempted to mitigate these issues, we cannot address every possible scenario. Thus, we advise developers to implement safety checks and perform domain-specific fine-tuning for sensitive use cases. Kindly refer to our [Terms of Service](https://chat.fanar.qa/terms-of-service) and [Privacy Policy](https://chat.fanar.qa/privacy-policy). The output generated by this model is not considered a statement of QCRI, HBKU, Qatar Foundation, MCIT or any other organization or individual. --- ## Evaluation Evaluation was conducted using a modified version of the LM Evaluation Harness and internal cultural alignment benchmarks.
<div style="overflow-x: auto;"> | Model | MMLU (5-shot) | MMMLU (Arabic) (0-shot) | ArabicMMLU (3-shot) | HellaSwag (0-shot) | PIQA (0-shot) | ARC Challenge (0-shot) | Belebele (Arabic) (3-shot) | ACVA (5-shot) | GSM8k | OALL (0-shot) | OALL v2 (0-shot) | Almieyar Arabic (3-shot) | Arab Cultural MCQ (3-shot) | AraDiCE PIQA (MSA) (0-shot) | AraDiCE PIQA(Egy) (0-shot) | AraDiCE PIQA(Lev) (0-shot) | AraDiCE ArabicMMLU(Egy) (0-shot) | AraDiCE ArabicMMLU(Lev) (0-shot) | |-------|----------------|--------------------------|----------------------|--------------------|---------------|-------------------------|------------------------------|---------------|--------|----------------|------------------|---------------------------|-----------------------------|-------------------------------|------------------------------|------------------------------|-----------------------------------|-----------------------------------| | Fanar-1-9B | 71.33% | **57.38%** | **67.42%** | **80.76%** | 81.66% | 59.73% | **79.31%** | **81.31%** | **45.79%** | **54.94%** | **63.20%** | **77.18%** | **72.30%** | **66.00%** | **62.19%** | 57.67% | **55.79%** | **55.63%** | | AceGPT-v2-8B | 63.55% | 41.71% | 58.55% | 76.97% | 80.03% | 49.40% | 60.61% | 78.36% | 10.92% | 43.58% | 47.00% | 66.83% | 67.50% | 63.17% | 61.48% | 56.75% | 43.40% | 40.96% | | gemma-2-9b | 70.60% | 54.04% | 64.32% | 79.82% | **82.97%** | **65.53%** | 75.31% | 79.66% | 21.61% | 50.24% | 57.23% | 73.82% | 68.60% | 63.98% | 60.17% | 58.05% | 49.61% | 47.15% | | jais-adapted-13b | 50.42% | 34.01% | 51.96% | 78.02% | 78.94% | 48.55% | 43.02% | 73.52% | 5.76% | 40.79% | 40.06% | 62.34% | 60.90% | 65.02% | **62.19%** | **59.25%** | 38.24% | 37.93% | | jais-family-6p7b | 32.50% | 25.34% | 34.81% | 69.28% | 75.95% | 40.27% | 34.54% | 60.13% | 3.87% | 37.55% | 33.59% | 32.17% | 34.00% | 65.18% | 60.23% | 58.38% | 28.50% | 29.46% | | Llama-3.1-8B | 65.10% | 43.21% | 55.73% | 78.95% | 81.01% | 53.41% | 61.59% | 77.72% | 26.00% | 43.01% | 52.29% | 63.84% | 60.00% | 57.51% | 55.28% | 53.81% | 41.44% | 38.39% | | Qwen2.5-7B | **74.18%** | 51.77% | 65.08% | 78.95% | 79.71% | 51.37% | 71.72% | 80.37% | 9.40% | 48.66% | 59.40% | 76.81% | 65.70% | 59.68% | 57.51% | 55.44% | 47.33% | 49.26% | </div> --- ## Citation If you use Fanar-1-9B or [Fanar-1-9B-Instruct](https://huggingface.co/QCRI/Fanar-1-9B-Instruct) or the Fanar GenAI system in your research or applications, please cite: ```bibtex @misc{fanarllm2025, title={Fanar: An Arabic-Centric Multimodal Generative AI Platform}, author={Fanar Team and Ummar Abbas and Mohammad Shahmeer Ahmad and Firoj Alam and Enes Altinisik and Ehsannedin Asgari and Yazan Boshmaf and Sabri Boughorbel and Sanjay Chawla and Shammur Chowdhury and Fahim Dalvi and Kareem Darwish and Nadir Durrani and Mohamed Elfeky and Ahmed Elmagarmid and Mohamed Eltabakh and Masoomali Fatehkia and Anastasios Fragkopoulos and Maram Hasanain and Majd Hawasly and Mus'ab Husaini and Soon-Gyo Jung and Ji Kim Lucas and Walid Magdy and Safa Messaoud and Abubakr Mohamed and Tasnim Mohiuddin and Basel Mousi and Hamdy Mubarak and Ahmad Musleh and Zan Naeem and Mourad Ouzzani and Dorde Popovic and Amin Sadeghi and Husrev Taha Sencar and Mohammed Shinoy and Omar Sinan and Yifan Zhang and Ahmed Ali and Yassine El Kheir and Xiaosong Ma and Chaoyi Ruan}}, year={2025}, url={https://arxiv.org/abs/2501.13944}, } ``` --- ## Acknowledgements This project is from [Qatar Computing Research Institute (QCRI)](https://qcri.org) at [Hamad Bin Khalifa University 
(HBKU)](https://hbku.edu.qa), a member of Qatar Foundation. We thank our engineers, researchers, and support team for their efforts in advancing Arabic-centric large language models. Special thanks to the [Ministry of Communications and Information Technology, State of Qatar](https://www.mcit.gov.qa/en/) for their continued support by providing the compute infrastructure through the Google Cloud Platform. --- ## License This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
Ruthvik2835/llamafine_tine
Ruthvik2835
2025-06-05T10:45:30Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-Instruct-hf", "base_model:adapter:codellama/CodeLlama-7b-Instruct-hf", "license:llama2", "region:us" ]
null
2025-06-05T10:44:53Z
--- library_name: peft license: llama2 base_model: codellama/CodeLlama-7b-Instruct-hf tags: - generated_from_trainer model-index: - name: codellama-hugcoder results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codellama-hugcoder This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 10 ### Training results ### Framework versions - PEFT 0.15.2.dev0 - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
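Because this repo contains a PEFT adapter rather than full model weights, inference requires attaching the adapter to the base model. A minimal sketch (the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter from this repository on top of the base model
model = PeftModel.from_pretrained(base, "Ruthvik2835/llamafine_tine")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```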
EdwardTurner/Qwen2.5-14B-Instruct_R1_0_1_0_most_extreme_reduced_train
EdwardTurner
2025-06-05T10:45:02Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-05T10:39:58Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
realarslan33/mistral-girlfriend-lora-quantized
realarslan33
2025-06-05T10:45:00Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T10:44:08Z
--- base_model: unsloth/mistral-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** realarslan33 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
realarslan33/mistral-girlfriend-lora
realarslan33
2025-06-05T10:43:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T10:42:52Z
--- base_model: unsloth/mistral-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** realarslan33 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xlight05/bal_coder_full
xlight05
2025-06-05T10:43:21Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T10:41:19Z
--- base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** xlight05 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/AReaL-boba-2-32B-i1-GGUF
mradermacher
2025-06-05T10:43:16Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:inclusionAI/AReaL-boba-2-32B", "base_model:quantized:inclusionAI/AReaL-boba-2-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-05T07:39:37Z
--- base_model: inclusionAI/AReaL-boba-2-32B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/inclusionAI/AReaL-boba-2-32B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/AReaL-boba-2-32B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.5 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q4_K_S.gguf) | 
i1-Q4_K_S | 18.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/AReaL-boba-2-32B-i1-GGUF/resolve/main/AReaL-boba-2-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
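As a concrete starting point, below is a minimal sketch of running one of these quants with `llama-cpp-python` (an assumption; any GGUF-capable runtime works). The filename matches the recommended Q4_K_M entry in the table above, and the generation settings are illustrative:

```python
from llama_cpp import Llama

# Load a downloaded quant; n_gpu_layers=-1 offloads all layers to GPU if available
llm = Llama(model_path="AReaL-boba-2-32B.i1-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm("Q: What is 17 * 23?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```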
ldeghellinck/w2v2-libri-10min
ldeghellinck
2025-06-05T10:41:35Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-05T10:29:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NORI7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven
NORI7
2025-06-05T10:41:19Z
31
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am savage arctic raven", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T23:42:49Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am savage arctic raven - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="NORI7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_arctic_raven", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fujiyama-kazunori-personal/huggingface/runs/g17r40up) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
joanna302/Qwen3-8B-Base_fr_pt_zh_ar_2e-05
joanna302
2025-06-05T10:41:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T08:49:20Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stablediffusionapi/mistoon-anime
stablediffusionapi
2025-06-05T10:40:31Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-05T10:40:00Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn2.stablediffusionapi.com/generations/11172860841693956414.png --- # Mistoon Anime API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "mistoon-anime" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/mistoon-anime) Model link: [View model](https://modelslab.com/models/mistoon-anime) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "mistoon-anime", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
medimed/finetuned_Qwen3
medimed
2025-06-05T10:39:40Z
16
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen3-0.6B-Base", "base_model:finetune:unsloth/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T15:21:12Z
--- base_model: unsloth/Qwen3-0.6B-Base tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** medimed - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-0.6B-Base This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
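No usage example is given; a minimal text-generation sketch follows the pipeline convention used by other cards in this collection. The chat-style input is an assumption based on the `trl`/`sft` tags:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="medimed/finetuned_Qwen3", device_map="auto")
output = generator([{"role": "user", "content": "Explain overfitting in one sentence."}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```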
stablediffusionapi/mistoonanimeofic
stablediffusionapi
2025-06-05T10:39:30Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-05T10:38:58Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn2.stablediffusionapi.com/generations/6350504101693959922.png --- # MistoonAnimeOfic API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "mistoonanimeofic" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/mistoonanimeofic) Model link: [View model](https://modelslab.com/models/mistoonanimeofic) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "mistoonanimeofic", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
medimed/finetuned_MCQwen3_lora
medimed
2025-06-05T10:38:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-0.6B-Base", "base_model:finetune:unsloth/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T10:29:14Z
--- base_model: unsloth/Qwen3-0.6B-Base tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** medimed - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-0.6B-Base This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sky67856785/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur
Sky67856785
2025-06-05T10:36:16Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tough elusive dinosaur", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-26T13:45:24Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tough elusive dinosaur - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Sky67856785/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-tough_elusive_dinosaur", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/skywalker32048-sarainwalk/huggingface/runs/z5b9mcbr) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
stablediffusionapi/copax-realistic-xl
stablediffusionapi
2025-06-05T10:35:27Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-05T10:34:12Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn.stablediffusionapi.com/generations/4002404581690817323.png --- # Copax Realistic XL - SDXL1.0 V2 API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "copax-realistic-xl" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/copax-realistic-xl) Model link: [View model](https://modelslab.com/models/copax-realistic-xl) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "copax-realistic-xl", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
nurik0210/Qwen2.5-7b-uzb-lora-adapter
nurik0210
2025-06-05T10:34:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "region:us" ]
null
2025-06-05T10:34:00Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
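Since the "How to Get Started" section above is still a placeholder, here is a minimal loading sketch, assuming this repository is a standard LoRA/PEFT adapter for the base model listed in the metadata (the prompt and generation settings are illustrative):

```python
# Minimal sketch for attaching this adapter to its base model
# (assumption: standard PEFT adapter layout; settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "nurik0210/Qwen2.5-7b-uzb-lora-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # load the LoRA weights on top

inputs = tokenizer("Salom! Qandaysiz?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```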
tencent/HunyuanCustom
tencent
2025-06-05T10:33:53Z
0
167
null
[ "safetensors", "image-to-video", "en", "arxiv:2505.04512", "base_model:tencent/HunyuanVideo", "base_model:finetune:tencent/HunyuanVideo", "region:us" ]
image-to-video
2025-05-08T13:37:04Z
--- language: - en base_model: - tencent/HunyuanVideo pipeline_tag: image-to-video --- <!-- ## **HunyuanCustom** --> <p align="center"> <img src="assets/material/logo.png" height=100> </p> # **HunyuanCustom** 🌅 <div align="center"> <a href="https://github.com/Tencent/HunyuanCustom"><img src="https://img.shields.io/static/v1?label=HunyuanCustom%20Code&message=Github&color=blue"></a> &ensp; <a href="https://hunyuancustom.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a> &ensp; <a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=192"><img src="https://img.shields.io/static/v1?label=Playground&message=Web&color=green"></a> </div> <div align="center"> <a href="https://arxiv.org/pdf/2505.04512"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a> &ensp; </div> <div align="center"> <a href="https://huggingface.co/tencent/HunyuanCustom"><img src="https://img.shields.io/static/v1?label=HunyuanVideo&message=HuggingFace&color=yellow"></a> &ensp; </div> ----- > [**HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation**](https://arxiv.org/pdf/2505.04512) <br> ## 🔥🔥🔥 News!! * May 8, 2025: 👋 We release the inference code and model weights of HunyuanCustom. [Download](models/README.md). ## 📑 Open-source Plan - HunyuanCustom - Single-Subject Video Customization - [x] Inference - [x] Checkpoints - [ ] ComfyUI - Audio-Driven Video Customization - Video-Driven Video Customization - Multi-Subject Video Customization ## Contents - [**HunyuanCustom** 🌅](#hunyuancustom-) - [🔥🔥🔥 News!!](#-news) - [📑 Open-source Plan](#-open-source-plan) - [Contents](#contents) - [**Abstract**](#abstract) - [**HunyuanCustom Overall Architecture**](#hunyuancustom-overall-architecture) - [🎉 **HunyuanCustom Key Features**](#-hunyuancustom-key-features) - [**Multimodal Video customization**](#multimodal-video-customization) - [**Various Applications**](#various-applications) - [📈 Comparisons](#-comparisons) - [📜 Requirements](#-requirements) - [🛠️ Dependencies and Installation](#️-dependencies-and-installation) - [Installation Guide for Linux](#installation-guide-for-linux) - [🧱 Download Pretrained Models](#-download-pretrained-models) - [🚀 Parallel Inference on Multiple GPUs](#-parallel-inference-on-multiple-gpus) - [🔑 Single-gpu Inference](#-single-gpu-inference) - [Run with very low VRAM](#run-with-very-low-vram) - [Run a Gradio Server](#run-a-gradio-server) - [🔗 BibTeX](#-bibtex) - [Acknowledgements](#acknowledgements) --- ## **Abstract** Customized video generation aims to produce videos featuring specific subjects under flexible user-defined conditions, yet existing methods often struggle with identity consistency and limited input modalities. In this paper, we propose HunyuanCustom, a multi-modal customized video generation framework that emphasizes subject consistency while supporting image, audio, video, and text conditions. Built upon HunyuanVideo, our model first addresses the image-text conditioned generation task by introducing a text-image fusion module based on LLaVA for enhanced multi-modal understanding, along with an image ID enhancement module that leverages temporal concatenation to reinforce identity features across frames.
To enable audio- and video-conditioned generation, we further propose modality-specific condition injection mechanisms: an AudioNet module that achieves hierarchical alignment via spatial cross-attention, and a video-driven injection module that integrates latent-compressed conditional video through a patchify-based feature-alignment network. Extensive experiments on single- and multi-subject scenarios demonstrate that HunyuanCustom significantly outperforms state-of-the-art open- and closed-source methods in terms of ID consistency, realism, and text-video alignment. Moreover, we validate its robustness across downstream tasks, including audio and video-driven customized video generation. Our results highlight the effectiveness of multi-modal conditioning and identity-preserving strategies in advancing controllable video generation. ## **HunyuanCustom Overall Architecture** ![image](assets/material/method.png) We propose **HunyuanCustom, a multi-modal, conditional, and controllable generation model centered on subject consistency**, built upon the Hunyuan Video generation framework. It enables the generation of subject-consistent videos conditioned on text, images, audio, and video inputs. ## 🎉 **HunyuanCustom Key Features** ### **Multimodal Video customization** HunyuanCustom supports inputs in the form of **text, images, audio, and video**. Specifically, it can handle single or multiple image inputs to enable customized video generation for one or more subjects. Additionally, it can incorporate extra audio inputs to drive the subject to speak the corresponding audio. Lastly, HunyuanCustom supports video input, allowing for the replacement of specified objects in the video with subjects from a given image. ![image](assets/material/teaser.png) ### **Various Applications** With the multi-modal capabilities of HunyuanCustom, numerous downstream tasks can be accomplished. For instance, by taking multiple images as input, HunyuanCustom can facilitate **virtual human advertisements** and **virtual try-on**. Additionally, with image and audio inputs, it can create **singing avatars**. Furthermore, by using an image and a video as inputs, HunyuanCustom supports **video editing** by replacing subjects in the video with those in the provided image. More applications await your exploration! ## 📈 Comparisons To evaluate the performance of HunyuanCustom, we compared it with state-of-the-art video customization methods, including VACE, Skyreels, Pika, Vidu, Keling, and Hailuo. The comparison focused on face/subject consistency, video-text alignment, and overall video quality.
| Models | Face-Sim | CLIP-B-T | DINO-Sim | Temp-Consis | DD | |-------------------|----------|----------|----------|-------------|------| | VACE-1.3B | 0.204 | _0.308_ | 0.569 | **0.967** | 0.53 | | Skyreels | 0.402 | 0.295 | 0.579 | 0.942 | 0.72 | | Pika | 0.363 | 0.305 | 0.485 | 0.928 | _0.89_ | | Vidu2.0 | 0.424 | 0.300 | 0.537 | _0.961_ | 0.43 | | Keling1.6 | 0.505 | 0.285 | _0.580_ | 0.914 | 0.78 | | Hailuo | _0.526_ | **0.314**| 0.433 | 0.937 | **0.94** | | **HunyuanCustom (Ours)** | **0.627**| 0.306 | **0.593**| 0.958 | 0.71 | ## 📜 Requirements The following table shows the requirements for running the HunyuanCustom model (batch size = 1) to generate videos: | Model | Setting<br/>(height/width/frame) | GPU Peak Memory | |:------------:|:--------------------------------:|:----------------:| | HunyuanCustom | 720px1280px129f | 80GB | | HunyuanCustom | 512px896px129f | 60GB | * An NVIDIA GPU with CUDA support is required. * The model is tested on a machine with 8 GPUs. * **Minimum**: The minimum GPU memory required is 24GB for 720px1280px129f, but generation will be very slow. * **Recommended**: We recommend using a GPU with 80GB of memory for better generation quality. * Tested operating system: Linux ## 🛠️ Dependencies and Installation Begin by cloning the repository: ```shell git clone https://github.com/Tencent/HunyuanCustom.git cd HunyuanCustom ``` ### Installation Guide for Linux We recommend CUDA versions 12.4 or 11.8 for the manual installation. Conda's installation instructions are available [here](https://docs.anaconda.com/free/miniconda/index.html). ```shell # 1. Create conda environment conda create -n HunyuanCustom python==3.10.9 # 2. Activate the environment conda activate HunyuanCustom # 3. Install PyTorch and other dependencies using conda # For CUDA 11.8 conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia # For CUDA 12.4 conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia # 4. Install pip dependencies python -m pip install -r requirements.txt python -m pip install tensorrt-cu12-bindings==10.6.0 tensorrt-cu12-libs==10.6.0 # 5. Install flash attention v2 for acceleration (requires CUDA 11.8 or above) python -m pip install ninja python -m pip install git+https://github.com/Dao-AILab/[email protected] ``` If you run into a floating point exception (core dump) on a specific GPU type, you may try the following solutions: ```shell # Option 1: Make sure you have installed CUDA 12.4, CUBLAS>=12.4.5.8, and CUDNN>=9.00 (or simply use our CUDA 12 docker image). pip install nvidia-cublas-cu12==12.4.5.8 export LD_LIBRARY_PATH=/opt/conda/lib/python3.8/site-packages/nvidia/cublas/lib/ # Option 2: Force explicit use of the CUDA 11.8 compiled version of PyTorch and all the other packages pip uninstall -r requirements.txt # uninstall all packages pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118 pip install -r requirements.txt pip install ninja pip install git+https://github.com/Dao-AILab/[email protected] ``` Alternatively, you can use the HunyuanVideo Docker image. Use the following command to pull and run the docker image.
```shell # For CUDA 12.4 (updated to avoid floating point exception) docker pull hunyuanvideo/hunyuanvideo:cuda_12 docker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_12 pip install gradio==3.39.0 # For CUDA 11.8 docker pull hunyuanvideo/hunyuanvideo:cuda_11 docker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_11 pip install gradio==3.39.0 ``` ## 🧱 Download Pretrained Models Details on downloading the pretrained models are shown [here](models/README.md). ## 🚀 Parallel Inference on Multiple GPUs For example, to generate a video with 8 GPUs, you can use the following command: ```bash cd HunyuanCustom export MODEL_BASE="./models" export PYTHONPATH=./ torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \ --input './assets/images/seg_woman_01.png' \ --pos-prompt "Realistic, High-quality. A woman is drinking coffee at a café." \ --neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \ --ckpt ${MODEL_BASE}"/hunyuancustom_720P/mp_rank_00_model_states.pt" \ --video-size 720 1280 \ --seed 1024 \ --sample-n-frames 129 \ --infer-steps 30 \ --flow-shift-eval-video 13.0 \ --save-path './results/sp_720p' ``` ## 🔑 Single-gpu Inference For example, to generate a video with 1 GPU, you can use the following command: ```bash cd HunyuanCustom export MODEL_BASE="./models" export CPU_OFFLOAD=1 export PYTHONPATH=./ python hymm_sp/sample_gpu_poor.py \ --input './assets/images/seg_woman_01.png' \ --pos-prompt "Realistic, High-quality. A woman is drinking coffee at a café." \ --neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \ --ckpt ${MODEL_BASE}"/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt" \ --video-size 512 896 \ --seed 1024 \ --sample-n-frames 129 \ --infer-steps 30 \ --flow-shift-eval-video 13.0 \ --save-path './results/1gpu_540p' \ --use-fp8 ``` ### Run with very low VRAM ```bash cd HunyuanCustom export MODEL_BASE="./models" export CPU_OFFLOAD=1 export PYTHONPATH=./ python hymm_sp/sample_gpu_poor.py \ --input './assets/images/seg_woman_01.png' \ --pos-prompt "Realistic, High-quality. A woman is drinking coffee at a café." \ --neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border."
\ --ckpt ${MODEL_BASE}"/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt" \ --video-size 720 1280 \ --seed 1024 \ --sample-n-frames 129 \ --infer-steps 30 \ --flow-shift-eval-video 13.0 \ --save-path './results/cpu_720p' \ --use-fp8 \ --cpu-offload ``` ## Run a Gradio Server ```bash cd HunyuanCustom bash ./scripts/run_gradio.sh ``` ## 🔗 BibTeX If you find [HunyuanCustom](https://arxiv.org/abs/2505.04512) useful for your research and applications, please cite using this BibTeX: ```BibTeX @misc{hu2025hunyuancustommultimodaldrivenarchitecturecustomized, title={HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation}, author={Teng Hu and Zhentao Yu and Zhengguang Zhou and Sen Liang and Yuan Zhou and Qin Lin and Qinglin Lu}, year={2025}, eprint={2505.04512}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2505.04512}, } ``` ## Acknowledgements We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF
mradermacher
2025-06-05T10:33:10Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:mesolitica/Malaysian-Qwen2.5-1.5B-Reasoning-SFT", "base_model:quantized:mesolitica/Malaysian-Qwen2.5-1.5B-Reasoning-SFT", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T10:22:33Z
--- base_model: mesolitica/Malaysian-Qwen2.5-1.5B-Reasoning-SFT language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mesolitica/Malaysian-Qwen2.5-1.5B-Reasoning-SFT <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q5_K_S.gguf) | Q5_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q6_K.gguf) | Q6_K | 1.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF/resolve/main/Malaysian-Qwen2.5-1.5B-Reasoning-SFT.f16.gguf) | f16 | 3.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
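As a concrete starting point, a minimal Python sketch using llama-cpp-python might look like this (the chosen quant file and sampling settings are illustrative; any file from the table above should work the same way):

```python
# Minimal GGUF inference sketch (assumption: llama-cpp-python is installed;
# the quant filename and settings below are illustrative).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Malaysian-Qwen2.5-1.5B-Reasoning-SFT-GGUF",
    filename="Malaysian-Qwen2.5-1.5B-Reasoning-SFT.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain step by step why the sky appears blue.", max_tokens=256)
print(out["choices"][0]["text"])
```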
yassineturki/temp_qlora_to_test
yassineturki
2025-06-05T10:32:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-05T10:32:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
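Given the repository tags (text-generation, conversational, 4-bit, bitsandbytes), a minimal loading sketch would be the following; the card itself provides no confirmed usage, so the prompt and generation settings are illustrative:

```python
# Minimal loading sketch (assumption: the checkpoint ships with its own
# 4-bit quantization config; prompt and settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "yassineturki/temp_qlora_to_test"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```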
BootesVoid/cmb97cn6t082y1b1ykyjs6ytk_cmbj6qngt0avzkfxsdwx6ddwx
BootesVoid
2025-06-05T10:31:12Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-05T10:31:08Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LUSTYLISA --- # Cmb97Cn6T082Y1B1Ykyjs6Ytk_Cmbj6Qngt0Avzkfxsdwx6Ddwx <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LUSTYLISA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LUSTYLISA", "lora_weights": "https://huggingface.co/BootesVoid/cmb97cn6t082y1b1ykyjs6ytk_cmbj6qngt0avzkfxsdwx6ddwx/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb97cn6t082y1b1ykyjs6ytk_cmbj6qngt0avzkfxsdwx6ddwx', weight_name='lora.safetensors') image = pipeline('LUSTYLISA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb97cn6t082y1b1ykyjs6ytk_cmbj6qngt0avzkfxsdwx6ddwx/discussions) to add images that show off what you've made with this LoRA.
liberalusa/LiberalMind_v1.5
liberalusa
2025-06-05T10:31:09Z
0
2
peft
[ "peft", "safetensors", "qwen2", "text-generation", "instruction-tuned", "qwen", "reasoning", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-06-02T19:32:29Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct library_name: peft tags: - text-generation - instruction-tuned - qwen - peft - reasoning inference: true --- # 🧠 Qwen2.5-7B-Instruct - Reasoning Model (PEFT) ## 📌 About the Author This model was created and fine-tuned by the researcher **[@liberalusa](https://huggingface.co/liberalusa)**, with a focus on developing language models capable of reasoning, explanation, and logical thinking. The main goal of the project is to take a step toward more interpretable and intelligent artificial intelligence. The work uses **Parameter-Efficient Fine-Tuning (PEFT)**, which makes it possible to fine-tune large language models efficiently, without the need for large-scale compute resources. This makes the model accessible to a wide range of researchers and developers. --- ## 🧠 About the Model This model is an adapter for the original [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), fine-tuned with an emphasis on logical tasks and instructions that require multi-step explanations. The model shows improved results on: - Chain-of-thought reasoning (step-by-step reasoning) - Answers with explanations - Instructions that require analysis or argumentation The model can be used in both research and applied projects. It was trained with LoRA on hard problems in programming, mathematics, biology, and the social sciences. A new reasoning method was developed, based on self-correction over a visualization of generation tokens. On benchmarks it is claimed to be on par with Gemini 2.5 Pro; the model is open, so you can check this yourself. --- ## 🎯 Project Goal To build an accessible and adaptable tool capable of generating not just text, but **well-argued thought**. Main application areas: - Educational assistants - Popular-science and technical explanations - Support for logical agents and reasoning systems If you want to use the model: for the reasoning code to work, create a `weights` folder and upload the LoRA weights named `adapter_config`, `adapter_model` (the main LoRA weights), and `training_config`. --- ## 📄 License The model is distributed under the **Apache 2.0** license and is free for research and commercial use. --- *Support and updates are available via the author's profile: [@liberalusa](https://huggingface.co/liberalusa)* --- ## 🚀 Usage (Transformers) You can load and use this model with PEFT and Transformers: ```python # Install necessary libraries if not already present # !pip install transformers torch accelerate bitsandbytes sentencepiece matplotlib import torch from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel import matplotlib.pyplot as plt import re import numpy as np import textwrap # For wrapping long labels in plot # --- Solver Parameters --- # These control the behavior of the prompts # We'll use a simplified version for this example, but you can expand it. # The .toFixed(2) from JS is handled here by str.format-style specifiers (:.2f). solverParams = { "depth_focus_max": 9.50, "creativity_focus_max": 9.50, "analytical_rigor_max": 9.80, "efficiency_focus_max": 9.00, "alternative_exploration_max": 9.20, "depth_focus_simple": 6.00, "creativity_focus_simple": 5.00, "analytical_rigor_simple": 6.00, "efficiency_focus_simple": 4.00, "alternative_exploration_simple": 5.00, # Temperatures (can be adjusted) "initial_gen_temp_single": 0.7, "verify_temp": 0.2, # Low temp for critique for consistency "refine_temp": 0.6, "synthesis_temp": 0.5, # Max tokens (adjust based on model and task) "max_initial_tokens": 1500, "max_critique_tokens": 1000, "max_refine_tokens": 2000, "max_synthesis_tokens": 3000, # Includes meta-analysis and final response } # --- Prompts (Copied and adapted as Python str.format templates) --- # Initial Generation Prompts INITIAL_GEN_PROMPT_MAXIMIZED = """ TASK: Generate a PROFOUNDLY INSIGHTFUL, TECHNICALLY MAXIMAL, HIGHLY CREATIVE, and RIGOROUSLY ANALYZED initial response. Emulate a PPO agent maximizing a reward function for **radical discovery, technical elegance, and absolute correctness**, especially for CODE/MATH. Go **exponentially beyond** the obvious; seek multiple, high-quality, **fundamentally diverse**, unconventional technical solutions backed by **unshakeable reasoning**. (Depth Focus: {solverParams[depth_focus_max]:.2f}, Creativity Focus: {solverParams[creativity_focus_max]:.2f}, Rigor Focus: {solverParams[analytical_rigor_max]:.2f}, Efficiency Focus: {solverParams[efficiency_focus_max]:.2f}) GUIDING PRINCIPLES (MAXIMIZED for Deep Exploration, Creativity & Rigor): 1. **EXPLORE SOLUTION SPACE EXPONENTIALLY:** * Brainstorm **radically different** algorithms, paradigms, data structures, coding patterns, mathematical frameworks, proof strategies, or interpretations. Reject incrementalism.
(Alternative Exploration MAX: {solverParams[alternative_exploration_max]:.2f}) * Actively pursue **novel, obscure, or cutting-edge** libraries, theorems, or methodologies. Push the boundaries of standard practice. MAXIMIZE CREATIVITY REWARD. 2. **SEEK MAXIMUM INSIGHT, NOVELTY & OPTIMAL EFFICIENCY:** * Hunt for **non-obvious, maximally elegant, theoretically optimal, or creatively groundbreaking** solutions. Actively challenge conventions. * Provide **exceptionally deep, rigorous, quantitative analysis** of trade-offs (e.g., asymptotic AND constant factor complexity, numerical precision/stability, scalability limits, maintainability impact). JUSTIFY EVERYTHING WITH EXTREME RIGOR. * Uncover and elucidate the fundamental mathematical principles or advanced programming paradigms governing the problem. Aim for **complete conceptual mastery**. 3. **DEMOLISH ASSUMPTIONS & DEFINE SCOPE WITH UTMOST PRECISION:** * Identify and **aggressively interrogate** implicit assumptions. Explore the **full spectrum** of consequences if relaxed or changed. * Define constraints with **mathematical precision** or propose explicitly justified assumptions, analyzing their impact with **exhaustive rigor**. 4. **ANTICIPATE ALL EDGE CASES & GUARANTEE ABSOLUTE ROBUSTNESS:** * Proactively identify and address **every conceivable edge case**, failure mode, security vulnerability, mathematical singularity/degeneracy. Design for **provable robustness**. 5. **GENERATE DIVERSE, FLAWLESS, DEEPLY ANALYZED OPTIONS:** * Generate **multiple, distinct, complete, runnable/provable, and EXHAUSTIVELY analyzed technical options**. * Provide **razor-sharp, critical comparisons** highlighting subtle yet crucial pros, cons, and trade-offs based on deep analysis. 6. **ABSOLUTE ACCURACY AND RIGOR ARE NON-NEGOTIABLE:** * Ensure **mathematical/logical/coding perfection**. Code must be flawless, robust, efficient, and demonstrably correct. Math must be formally immaculate, complete, and insightful. OUTPUT FORMATTING (CRITICAL - MAXIMIZE ANALYZED TECHNICAL CONTENT): * **CODE/MATH OUTPUT IS PARAMOUNT:** Prioritize complete, heavily commented, runnable/verifiable code snippets or detailed, formally perfect mathematical derivations/proofs, **accompanied by CONCISE but PROFOUND analysis** of their properties (complexity, stability, limitations, novelty). * **CLEARLY SEPARATE ALTERNATIVES:** Use distinct, well-labeled sections/code blocks for different technical solutions, including **deep comparative analysis**. * **MINIMIZE PROSE:** Keep text ruthlessly concise, focused *only* on essential explanations of the core technical content, setup, or the **deep analysis mandated**. Assume expert audience. NO VERBOSITY. * Structure logically using headings, code blocks (with language hints), and precise math notation (Markdown LaTeX: $...$ or $$...$$). USER REQUEST: "{prompt}" INITIAL DEEP EXPLORATORY RESPONSE (MAX Code/Math Focus, High Analysis, High Creativity): """ INITIAL_GEN_PROMPT_MODERATE = """ USER REQUEST: "{prompt}" TASK: Generate a comprehensive, clear, insightful, and well-structured initial response. Aim for accuracy and clarity, covering key aspects. Briefly explore relevant alternative perspectives or approaches where helpful. (Depth Focus: {solverParams[depth_focus_simple]:.2f}, Creativity Focus: {solverParams[creativity_focus_simple]:.2f}, Rigor Focus: {solverParams[analytical_rigor_simple]:.2f}) GUIDING PRINCIPLES (Balanced Quality & Insight): 1. 
**Address the Core Request Clearly:** Directly answer the user's question or fulfill the task with clarity. 2. **Structure and Readability:** Organize information logically (headings, lists, paragraphs). Write clearly and concisely. 3. **Accuracy and Soundness:** Ensure factual correctness. If providing code or technical details, ensure they are generally sound and well-explained. (Rigor Focus: {solverParams[analytical_rigor_simple]:.2f}) 4. **Reasonable Completeness & Depth:** Cover the main points. Briefly touch upon important considerations, underlying principles, or potential trade-offs to add useful depth. (Depth Focus: {solverParams[depth_focus_simple]:.2f}) 5. **Consider Alternatives (Helpfulness):** Where appropriate, briefly mention or explain alternative viewpoints, methods, or interpretations to provide a more rounded understanding. (Alternative Exploration: {solverParams[alternative_exploration_simple]:.2f}, Creativity Focus: {solverParams[creativity_focus_simple]:.2f}) 6. **Efficiency Awareness (Minor):** If relevant (e.g., simple algorithms), be mindful of generally efficient approaches. (Efficiency Focus: {solverParams[efficiency_focus_simple]:.2f}) OUTPUT FORMATTING: * Use appropriate Markdown formatting for readability. * Present code clearly in code blocks with language hints if possible. * Explain technical concepts clearly and accurately. * Structure logically for easy understanding. INITIAL RESPONSE (Balanced Clarity, Accuracy, Moderate Insight): """ # Critique Prompts CRITIQUE_PROMPT_MAXIMIZED = """ YOU ARE AN **ABSOLUTELY UNCOMPROMISING, HYPER-CRITICAL, DEEPLY ANALYTICAL** UNIVERSAL CRITIC specializing in CODE and MATH. Your function is to simulate an **EXTREME REWARD/PENALTY GRADIENT** for a PPO-like process, ruthlessly pushing towards **PERFECTION in correctness, MAXIMAL technical depth, PEAK efficiency, RADICAL creativity, and EXHAUSTIVE exploration of superior alternatives.** Be pathologically demanding about ANY flaw, superficiality, inefficiency, or lack of true insight. (Depth Focus: {solverParams[depth_focus_max]:.2f}, Creativity Focus: {solverParams[creativity_focus_max]:.2f}, Rigor Focus: {solverParams[analytical_rigor_max]:.2f}, Efficiency Focus: {solverParams[efficiency_focus_max]:.2f}) Evaluate the provided text/output against these **NON-NEGOTIABLE PILLARS**: 1. **Correctness, Clarity & Technical Rigor (INFINITE PENALTY for errors):** * **Code:** Find **EVERY SINGLE BUG** (syntax, runtime, logic, concurrency, security). Is it **OPTIMALLY EFFICIENT** (asymptotically AND practically)? Is the style **PERFECT**? Error handling **BULLETPROOF**? Security **IMPREGNABLE**? * **Math:** Verify **EVERY STEP** with **ABSOLUTE FORMAL RIGOR**. Are formulas exact? Derivations/proofs complete, elegant, justified beyond doubt? Notation flawless? Conditions explicit, necessary, sufficient? * Identify **ANY** ambiguity, factual error, logical leap, or imprecise statement. DEMAND PERFECTION. 2. **Exploration, Insightfulness, Creativity & Alternatives (MAXIMIZE REWARD for depth/novelty; MAXIMUM PENALTY for superficiality/obviousness):** * **Technical Alternatives (CRITICAL - MAXIMUM PENALTY IF ABSENT/WEAK):** Did it explore **multiple, fundamentally different, non-obvious, provably valid** approaches? Were these alternatives analyzed comparatively with **profound depth and rigor**? If not, **DEMAND specific, creative, theoretically superior alternatives** be investigated, implemented, and rigorously compared. 
**PUNISH MENTALLY sticking to basic/standard solutions** without overwhelming justification and deep comparative analysis. (Alternative Exploration MAX: {solverParams[alternative_exploration_max]:.2f}) * **Depth & Insight:** Is the solution **technically profound**, revealing **deep, non-trivial understanding**? Is the analysis **maximally rigorous, quantitative, insightful, and complete**? DEMAND **ORDERS OF MAGNITUDE deeper analysis**, justification, exploration of trade-offs, and discussion of limitations. **REJECT ALL SURFACE-LEVEL EXPLANATIONS INSTANTLY.** * **Creativity & Novelty:** Does the solution demonstrate **significant originality, elegance, or insight far beyond standard textbook methods**? If not, **explicitly DEMAND investigation into more creative, elegant, or state-of-the-art solutions** [Suggest specific directions if possible]. MAXIMIZE REWARD FOR NOVELTY. * **Efficiency (MAXIMUM PENALTY IF SUBOPTIMAL):** Is the solution **THEORETICALLY AND PRACTICALLY OPTIMAL** in terms of time/space complexity? Are constant factors minimized? If not, **DEMAND investigation and implementation of provably superior approaches.** (Efficiency Focus MAX: {solverParams[efficiency_focus_max]:.2f}) * **Edge Cases & Robustness:** Did it handle **ALL conceivable edge cases** exhaustively and ensure **provable robustness**? Point out *ANY* potential omission or weakness, however obscure. * **Completeness & Practicality:** Is the solution complete, well-documented, easily usable, and practically viable? Are there **missed opportunities for profound simplification, generalization, or far more illustrative examples**? Original User Request (for context): "{original_prompt}" TEXT/OUTPUT TO ANALYZE (Current AI 'Policy' Output): --- START --- {text_to_analyze} --- END --- PROVIDE **ONLY** A LIST OF SPECIFIC, ACTIONABLE, **EXTREMELY DEMANDING**, AND **TECHNICALLY PRECISE** REQUIREMENTS FOR IMPROVEMENT (These are the gradients for the next policy update. Maximize their strength and specificity): Correctness/Rigor Issues (Be Precise, Ruthless & Unforgiving): * [Requirement 1: State the exact code bug/math error/logical flaw/imprecision [Location] and demand the precise correction / rigorous proof step / clarification needed for PERFECTION.] * [...] Exploration/Insight/Alternative/Creativity/Efficiency Gaps (CRITICAL - Demand **MASSIVE, DEEP, SPECIFIC** Action): * [Requirement X: **DEMAND IMMEDIATE exploration, implementation, and DEEP comparative analysis of specific alternative non-obvious/creative/superior algorithms/formulas [Name Them Specifically]** because the current one is [grossly inefficient / trivial / suboptimal / lacks fundamental insight / fails under condition Y]. Provide expected analysis criteria (e.g., complexity, stability bounds).] * [Requirement Y: DEMAND **rigorous, quantitative, formal analysis** of [asymptotic time/space complexity / numerical error bounds / convergence proof / theoretical limits] and comparison with [Specific Alternative]'s proven properties.] * [Requirement Z: Identify specific missed edge cases [Describe Them Precisely] or robustness vulnerabilities and require **comprehensive, mathematically/logically provable handling** and demonstration.] * [Requirement A: State that the solution LACKS ANY REAL CREATIVITY/PROFUNDITY and require investigation and implementation of [Specific novel/elegant/theoretically superior method] to achieve a breakthrough.] 
* [Requirement B: DEMAND **unshakeable justification** for [Specific technical choice] based on rigorous analysis, formal proof, and deep comparison against specified alternatives.] * [Requirement C: Identify superficial/hand-wavy explanations [Location] and demand **complete rewriting with maximum technical depth, precision, and formal rigor**.] * [Requirement D: Identify suboptimal efficiency and DEMAND implementation and analysis of [Specific Superior Algorithm/Data Structure] with proof of improvement.] Format: Requirements MUST be actionable, specific, technically grounded, and **demand the highest possible standard**. Frame requirements as **imperative commands** for improvement. Output Format (Strictly Adhere): REQUIREMENTS FOR IMPROVEMENT (Policy Update Gradient - MAX STRENGTH): [Requirement 1: ...] [Requirement 2: ...] ... [Requirement N: ...] If (and **ONLY IF**) the output is technically **PERFECT**, exceptionally insightful, demonstrates **profound and creative exploration of superior alternatives** with **absolute analytical rigor**, AND fully addresses the request at the **deepest possible level**, output **ONLY**: REQUIREMENTS FOR IMPROVEMENT (Policy Update Gradient - MAX STRENGTH): None. """ CRITIQUE_PROMPT_MODERATE = """ You are a helpful AI assistant acting as a constructive critic. Evaluate the provided "Text to Analyze" based on its quality, clarity, accuracy, insightfulness, and how well it addresses the likely "Original User Request". Aim for actionable feedback. (Depth Focus: {solverParams[depth_focus_simple]:.2f}, Creativity Focus: {solverParams[creativity_focus_simple]:.2f}, Rigor Focus: {solverParams[analytical_rigor_simple]:.2f}) Original User Request (for context): "{original_prompt}" Text to Analyze: --- START --- {text_to_analyze} --- END --- Provide a list of specific, actionable suggestions for improvement. Focus on: 1. **Clarity & Structure:** Is the text easy to understand? Is the language precise? Well-organized? Any confusing parts? 2. **Accuracy & Soundness:** Any factual errors, misleading statements? Is code logic generally correct and understandable? (Rigor Focus: {solverParams[analytical_rigor_simple]:.2f}) 3. **Completeness & Depth:** Does it adequately cover the main points? Could key concepts be explained with more helpful detail or insight? (Depth Focus: {solverParams[depth_focus_simple]:.2f}) 4. **Insightfulness & Alternatives:** Could the response be more insightful? Does it consider different angles or alternative interpretations/methods where helpful? Could examples be more illustrative? (Creativity Focus: {solverParams[creativity_focus_simple]:.2f}, Alternative Exploration: {solverParams[alternative_exploration_simple]:.2f}) 5. **Efficiency Awareness (Minor):** If relevant, are the suggested approaches generally efficient? (Efficiency Focus: {solverParams[efficiency_focus_simple]:.2f}) 6. **Formatting:** Is formatting clear and helpful? Output Format (Strictly Adhere): SUGGESTIONS FOR IMPROVEMENT: * [Suggestion 1: Be specific, e.g., "Clarify the explanation of X in the second paragraph for better understanding."] * [Suggestion 2: e.g., "Consider adding a brief example demonstrating Y to enhance insight."] * [Suggestion 3: e.g., "Verify the accuracy of the statement about Z regarding its implications."] * [Suggestion 4: e.g., "Briefly explaining the trade-offs between approach A and B could add helpful depth."] * [Suggestion 5: e.g., "Could you explore the alternative perspective of [Specific Viewpoint]?"] * [...] 
If the text is already excellent and requires no significant changes, output ONLY: SUGGESTIONS FOR IMPROVEMENT: None. """ # Refinement Prompts REFINE_PROMPT_MAXIMIZED = """ TASK: Execute a **TRANSFORMATIVE REVISION** of the 'Original Text/Output' (current policy) based on the **EXTREME** 'Requirements for Improvement' (policy update gradient). Generate a **demonstrably superior, technically maximal, deeply analytical, and creatively advanced** improved version. **Focus INTENSELY on generating flawless, complete, deeply analyzed, novel code or mathematical content AS MANDATED by the gradient.** Address EVERY requirement with ABSOLUTE rigor and depth. (Depth Focus: {solverParams[depth_focus_max]:.2f}, Creativity Focus: {solverParams[creativity_focus_max]:.2f}, Rigor Focus: {solverParams[analytical_rigor_max]:.2f}, Efficiency Focus: {solverParams[efficiency_focus_max]:.2f}, Alternative Exploration MAX: {solverParams[alternative_exploration_max]:.2f}) Original User Request (for context): "{original_prompt}" Original Text/Output (Current Policy): {original_solution} Requirements for Improvement (Policy Update Gradient - Execute ALL Commands Meticulously & Profoundly): {correction_requests} Instructions (Simulating Policy Update & Maximizing Depth/Creativity/Rigor): 1. **Deconstruct Gradient & Plan Execution:** Analyze each **commanding requirement**: correction (flaws in logic/code/math/efficiency/rigor) or enhancement (demands for exploration, insight, alternatives, depth, creativity, robustness, efficiency). Determine the required transformation level. 2. **Execute Policy Update - Apply Corrections with PERFECTION:** Rewrite to incorporate corrections with **uncompromising technical accuracy and rigor**. Code must be flawless, maximally efficient, robust. Math formally perfect, fully justified. Address efficiency/robustness/security mandates completely. 3. **Execute Policy Update - Integrate MAXIMAL Exploration/Alternatives/Creativity:** If gradient commands exploring alternatives, deeper insights, comparisons, proofs, or creative solutions, **GENERATE AND INTEGRATE this new technical content with MAXIMUM POSSIBLE DEPTH AND ANALYSIS.** Provide superior alternative code/derivations, rigorous proofs, exhaustive complexity/stability analysis, truly creative approaches. FULFILL THE EXPLORATION/CREATIVITY MANDATE BEYOND EXPECTATION. 4. **Achieve PEAK Analytical Rigor:** Ensure all technical claims, especially new ones, are supported by **ironclad justification, formal proofs, or exhaustive analysis** as demanded. Elevate the standard. 5. **Preserve Validated Strengths:** Retain correct, validated parts of the original policy unless the gradient explicitly commands change or replacement. 6. **Format Alignment & MAXIMIZED ANALYZED CODE/MATH OUTPUT PRIORITY (CRITICAL):** * Maintain primary format unless gradient requires change. * **ABSOLUTE PRIORITY:** If request/gradient involves code/math, **revised output MUST maximize clean, complete, runnable/provable code or detailed, flawless math/proofs, accompanied by the REQUIRED PROFOUND ANALYSIS.** * **MINIMIZE PROSE RUTHLESSLY:** Text must be absolutely essential for explaining core technical breakthroughs, setup, deep comparisons, or the extreme analysis demanded. NO FLUFF. * Ensure new technical content integrates logically. Use pristine formatting (code blocks, LaTeX). 7. 
**Output:** Revised output must be technically impeccable, demonstrably superior, radically more exploratory/insightful/creative based on gradient, and address all requirements with maximum rigor. Do NOT include meta-commentary. Output ONLY the final, transformed policy. FINAL IMPROVED TEXT/OUTPUT (Updated Policy - MAXIMIZED Depth/Analysis/Creativity/Rigor): """ REFINE_PROMPT_MODERATE = """ TASK: Revise the 'Original Text/Output' based on the 'Suggestions for Improvement' to create an improved version. Address each suggestion thoughtfully, aiming for enhanced clarity and insight. (Depth Focus: {solverParams[depth_focus_simple]:.2f}, Creativity Focus: {solverParams[creativity_focus_simple]:.2f}, Rigor Focus: {solverParams[analytical_rigor_simple]:.2f}) Original User Request (for context): "{original_prompt}" Original Text/Output: {original_solution} Suggestions for Improvement (Address these points): {correction_requests} Instructions: 1. **Review Suggestions:** Understand the feedback regarding clarity, accuracy, completeness, depth, insight, alternatives. 2. **Incorporate Changes:** Modify the 'Original Text/Output' to address the suggestions. Improve clarity, fix inaccuracies, add requested details or examples. Consider alternative explanations suggested. (Alternative Exploration: {solverParams[alternative_exploration_simple]:.2f}) 3. **Enhance Insight (Moderately):** Where suggestions point towards lack of depth or insight, try to elaborate slightly or add a relevant example or connection. (Depth Focus: {solverParams[depth_focus_simple]:.2f}) 4. **Maintain Strengths:** Keep the good parts of the original text. 5. **Ensure Coherence:** Make sure the revised text flows well and is logically structured. 6. **Formatting:** Use clear and appropriate formatting. Ensure code/technical parts are accurate and well-presented. 7. **Output:** Provide only the final, revised text. Do not include commentary about the changes made. FINAL REVISED TEXT/OUTPUT (Improved Clarity, Accuracy, Moderate Insight): """ # Synthesis Prompts SYNTHESIS_PROMPT_MAXIMIZED = """ YOU ARE AN ELITE TECHNICAL META-OPTIMIZER. Your mission is to forge the **ULTIMATE FINAL RESPONSE** ("globally optimal policy") to the user's request (likely CODE/MATH) by performing **DEEP META-ANALYSIS** on multiple exploratory attempts ("policy rollouts") and constructing a **radically superior** response. Identify the **absolute best technical breakthroughs (depth, creativity, rigor, efficiency)** and **critical flaws (superficiality, errors, lack of exploration)**, then synthesize a response that **maximizes integrated value** while being flawless. (Depth Focus: {solverParams[depth_focus_max]:.2f}, Creativity Focus: {solverParams[creativity_focus_max]:.2f}, Rigor Focus: {solverParams[analytical_rigor_max]:.2f}, Efficiency Focus: {solverParams[efficiency_focus_max]:.2f}, Alternative Exploration MAX: {solverParams[alternative_exploration_max]:.2f}) Original User Request: "{original_prompt}" Exploratory Attempts (Policy Rollouts for Meta-Analysis): {results_summary} // Analyze these diverse technical trajectories, successes, and failures. 
Your Task (CRITICAL - Execute BOTH Sections with MAXIMUM Depth & Rigor): **SECTION 1: DEEP EXPLORATION PATH META-ANALYSIS (Technical Policy Evaluation - MAXIMIZE Insight/Critique)** Perform a profound analysis of the attempts: (A) **Identify PEAK Technical Discoveries & High-Reward Strategies:** Pinpoint specific elements demonstrating: * **Breakthrough Correctness/Efficiency:** Flawless code/math, optimal algorithms (provably). * **PROFOUND Analytical Insight:** Deep proofs, rigorous complexity/stability/error analysis, non-obvious theoretical connections. * **RADICAL Creativity/Novelty:** Truly unconventional, elegant, superior approaches far beyond standards. * **Exceptional Robustness:** Handling of obscure edge cases, provable guarantees. * **Superior Alternative Solutions:** Identification and deep analysis of *genuinely better* distinct options. * **Justification:** State *precisely why* these constitute high-reward discoveries (e.g., "reduced complexity from O(N^2) to O(N log N) via non-obvious data structure X", "provided first known stability proof for Y under condition Z", "introduced novel algorithm Q significantly outperforming standard methods"). (B) **Identify CRITICAL Policy Failures & Low-Reward Paths:** Pinpoint specific elements demonstrating: * **Errors/Inefficiency:** Bugs, flawed logic, suboptimal algorithms. * **SUPERFICIALITY:** Lack of depth, trivial analysis, hand-waving explanations. **PENALIZE HEAVILY.** * **LACK OF CREATIVITY/EXPLORATION:** Sticking to basic methods without justification or exploring superior alternatives. **PENALIZE HEAVILY.** * **Flawed Rigor:** Incomplete proofs, missing analysis, unmet conditions. * **Ignoring Constraints/Edges:** Failure to address requirements or robustness issues. * **Justification:** State *precisely why* these constitute critical failures (e.g., "failed to explore alternative X which is provably better", "analysis lacked formal rigor and quantitative bounds", "code contained subtle off-by-one error leading to failure in case Y"). (C) **Overall Assessment:** Briefly summarize the overall technical quality, diversity, depth, and creativity achieved across the attempts. Which path yielded the most valuable technical insights or solutions? **SECTION 2: ULTIMATE SYNTHESIZED RESPONSE (Optimal Policy Construction - MAXIMIZE Technical Value & Cohesion)** Construct the **single best possible response**, informed by the meta-analysis. This is NOT just merging. * **Integrate PEAK Strengths Synergistically:** Actively fuse the most valuable *distinct* technical discoveries (code, math, insights, analyses) from different attempts into a cohesive, superior whole. Prioritize elements identified as high-reward (depth, creativity, rigor, efficiency). * **Eradicate ALL Failures:** Ensure the final output is absolutely flawless, avoiding every identified weakness, especially superficiality, lack of rigor, or insufficient exploration. * **Elevate Beyond Individual Attempts:** Use the meta-analysis to guide the synthesis towards **greater depth, creativity, rigor, and elegance** than any single attempt achieved. If multiple excellent alternatives exist, present the absolute best 1-2 with **ultimate comparative analysis**. * **Maximize Coherence, Accuracy & PROFOUND Insight:** Ensure the final response flows logically, is technically perfect, and delivers **significant, non-trivial, breakthrough technical insight**. 
* **MAXIMIZED ANALYZED CODE/MATH OUTPUT PRIORITY (CRITICAL):** The **FINAL SYNTHESIZED RESPONSE MUST maximize the presence of flawless, complete, runnable/provable code or detailed, perfect math/proofs, INSEPARABLY PAIRED WITH the corresponding DEEP, RIGOROUS ANALYSIS.** Minimize all other explanatory text. * **Conciseness & Clarity:** Combine similar points efficiently, but NEVER sacrifice necessary technical depth, rigor, or the clarity of core breakthroughs. Output Format (Strictly Adhere - Both Sections REQUIRED): SECTION 1: DEEP EXPLORATION PATH META-ANALYSIS (Technical Policy Evaluation - MAXIMIZE Insight/Critique) (A) Peak Technical Discoveries & High-Reward Strategies: [Example: "Attempt [N]'s rigorous proof of O(N log N) complexity for algorithm X using potential functions was a key breakthrough."] [Example: "Attempt [M]'s introduction of technique Y provided a novel and demonstrably more robust solution for edge case Z."] ... (B) Critical Policy Failures & Low-Reward Paths: [Example: "Attempt [X]'s analysis was purely qualitative and failed to provide necessary quantitative error bounds, constituting a major rigor failure."] [Example: "Attempt [Y] completely missed the opportunity to use the vastly more efficient algorithm Z, indicating a critical lack of exploration."] ... (C) Overall Assessment: [Brief summary of exploration effectiveness, e.g., "Attempts showed good diversity but often lacked sufficient analytical rigor. Attempt [N] provided the most profound technical contribution."] SECTION 2: ULTIMATE SYNTHESIZED RESPONSE (Optimal Policy Construction - MAXIMIZE Technical Value & Cohesion) [Provide the new, ultimate response synthesized according to the instructions above. Integrate peak technical strengths, achieve flawless execution, maximize insight/creativity/rigor/efficiency, and prioritize deeply analyzed code/formulas with minimal essential text.] Ensure the complete output contains BOTH sections clearly marked. """ SYNTHESIS_PROMPT_MODERATE = """ You are an expert synthesizer. Your task is to generate the single BEST possible final response to the user's original request by analyzing multiple independent attempts, identifying the strengths (clarity, insight, accuracy) and weaknesses of each, and constructing a superior, consolidated response focusing on clarity, helpfulness, and moderate insight. (Depth Focus: {solverParams[depth_focus_simple]:.2f}, Creativity Focus: {solverParams[creativity_focus_simple]:.2f}, Rigor Focus: {solverParams[analytical_rigor_simple]:.2f}) Original User Request: "{original_prompt}" Attempts for Analysis: {results_summary} // Analyze these attempts for their quality. Your Task (Follow ALL steps): **SECTION 1: ATTEMPT ANALYSIS** Examine the attempts provided: (A) **Identify Key Strengths:** Pinpoint the strongest elements: * Clear explanations, helpful analogies. * Accurate information, sound logic. * Useful, illustrative examples. * Good structure, easy readability. * Well-presented and generally correct code (if applicable). * Insightful points or connections. * Consideration of helpful alternative perspectives. * Note *why* these elements are good. (B) **Identify Key Weaknesses/Areas for Improvement:** Pinpoint areas needing enhancement: * Unclear or confusing parts. * Potential inaccuracies or misleading statements. * Missing important information or context. * Awkward phrasing or poor structure. * Less effective examples. * Explanations lacking sufficient (moderate) depth or insight. * Note *why* they are weak. 
(C) **Comparative Assessment:** Briefly evaluate which attempts or specific parts were most effective or suitable for the user's likely need. Note any particularly clear or insightful contributions. **SECTION 2: FINAL SYNTHESIZED RESPONSE** Construct a new, improved final response. This is NOT just merging. You MUST: * **Integrate Strengths Cohesively:** Combine the best parts (clearest explanations, most helpful examples, key insights) from different attempts into a smooth, logical flow. * **Correct Weaknesses:** Avoid or fix the identified issues. Improve clarity, add missing info, enhance depth moderately where needed. * **Prioritize Clarity, Accuracy & Helpfulness:** Ensure the final response is easy to understand, accurate, directly addresses the original request, and incorporates the most useful insights and examples. * **Structure Logically:** Organize the final response effectively using headings, lists, etc. Use clear Markdown formatting. * **Conciseness:** Combine similar good points effectively; avoid unnecessary repetition while maintaining helpfulness. Output Format (Strictly follow - Both Sections REQUIRED): SECTION 1: ATTEMPT ANALYSIS (A) Key Strengths Identified: [Example 1: "Attempt [N] had a very clear step-by-step explanation of process X."] [Example 2: "The analogy used in Attempt [M] for concept Y was particularly insightful."] ... (B) Key Weaknesses/Areas for Improvement Identified: [Example 1: "Attempt [X] could benefit from a concrete example for point Z."] [Example 2: "Attempt [Y]'s structure felt a bit disjointed in the middle section."] ... (C) Comparative Assessment: [Brief summary, e.g., "Attempt [M] offered the clearest core explanation, while Attempt [N] had better examples."] SECTION 2: FINAL SYNTHESIZED RESPONSE [Provide the new, superior response synthesized according to the instructions above. Integrate strengths, correct weaknesses, ensure clarity, accuracy, good structure, and incorporate key insights and helpful examples.] Ensure the complete output contains BOTH sections clearly marked. """ # --- Model Loading and Generation Function --- # User provided paths base_model_name = "Qwen/Qwen2.5-7B-Instruct" lora_weights_dir = "/content/weights" # Make sure this path is correct # It's good practice to initialize these globally if they are reused # or pass them around. For simplicity, global for now. tokenizer = None model = None model_loaded = False def load_model_and_tokenizer(): global tokenizer, model, model_loaded if model_loaded: print("Model already loaded.") return print(f"Loading tokenizer from {base_model_name}...") try: tokenizer = AutoTokenizer.from_pretrained(base_model_name) print(f"Loading base model from {base_model_name}...") base_model_for_peft = AutoModelForCausalLM.from_pretrained( base_model_name, device_map="auto", # offload_folder="/content/offload", # Optional, depends on memory torch_dtype=torch.bfloat16 # Recommended for Qwen2.5 ) print(f"Loading LoRA weights from {lora_weights_dir}...") model = PeftModel.from_pretrained( base_model_for_peft, lora_weights_dir, device_map="auto", # offload_folder="/content/offload" # Optional ) model.eval() # Set to evaluation mode model_loaded = True print("Model and tokenizer loaded successfully.") except Exception as e: print(f"Error loading model: {e}") model_loaded = False def generate_with_model(prompt_text, temperature, max_new_tokens): if not model_loaded: print("Model not loaded. 
Cannot generate.") return "(Error: Model not loaded)" try: inputs = tokenizer(prompt_text, return_tensors="pt", truncation=True, max_length=4096-max_new_tokens).to(model.device) # Max length Qwen2.5 32k, but set lower for safety # Qwen2.5 instruct models often use a chat template. # Let's try to apply a generic one if the tokenizer has it. # If your LoRA was trained with a specific chat format, adapt this. # For raw prompt injection like in the original JS, this direct input is often fine. # However, for Qwen models, using their message format is often better. # Let's assume for now the prompts are designed for direct injection. generation_kwargs = { "input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"], "temperature": temperature, "max_new_tokens": max_new_tokens, "do_sample": temperature > 0.01, # Only sample if temp is not ~0 "pad_token_id": tokenizer.eos_token_id # Common practice } print(f"\n--- Generating (temp: {temperature}, max_tokens: {max_new_tokens}) ---") # print(f"Input prompt (first 200 chars): {prompt_text[:200]}...") with torch.no_grad(): outputs = model.generate(**generation_kwargs) # Decode, skipping special tokens and also the input prompt response_text = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True) # print(f"Raw LLM Output (first 200 chars): {response_text[:200]}...") return response_text.strip() except Exception as e: print(f"Error during generation: {e}") return f"(Error: Generation failed - {str(e)})" # --- Core Logic Functions --- def get_initial_solution(user_prompt, force_code_math_focus=False): print_header("1. Generating Initial Solution") if force_code_math_focus: prompt_template = INITIAL_GEN_PROMPT_MAXIMIZED temp = solverParams["initial_gen_temp_single"] # Can also have different temps max_tokens = solverParams["max_initial_tokens"] print_subheader("Using MAXIMIZED Code/Math Focus") else: prompt_template = INITIAL_GEN_PROMPT_MODERATE temp = solverParams["initial_gen_temp_single"] max_tokens = solverParams["max_initial_tokens"] print_subheader("Using MODERATE General Focus") formatted_prompt = prompt_template.format(prompt=user_prompt, solverParams=solverParams) solution = generate_with_model(formatted_prompt, temp, max_tokens) print_output_preview(solution) return solution def get_critique(text_to_analyze, original_user_prompt, force_code_math_focus=False): print_header("2. Generating Critique") if force_code_math_focus: prompt_template = CRITIQUE_PROMPT_MAXIMIZED temp = solverParams["verify_temp"] max_tokens = solverParams["max_critique_tokens"] print_subheader("Using MAXIMIZED Code/Math Focus for Critique") else: prompt_template = CRITIQUE_PROMPT_MODERATE temp = solverParams["verify_temp"] max_tokens = solverParams["max_critique_tokens"] print_subheader("Using MODERATE General Focus for Critique") formatted_prompt = prompt_template.format( original_prompt=original_user_prompt, text_to_analyze=text_to_analyze, solverParams=solverParams ) critique = generate_with_model(formatted_prompt, temp, max_tokens) # Parse critique none_marker_agent = "REQUIREMENTS FOR IMPROVEMENT (Policy Update Gradient - MAX STRENGTH): None." none_marker_generic = "SUGGESTIONS FOR IMPROVEMENT: None." 
requirements_marker_agent = "REQUIREMENTS FOR IMPROVEMENT (Policy Update Gradient - MAX STRENGTH):" suggestions_marker_generic = "SUGGESTIONS FOR IMPROVEMENT:" if none_marker_agent in critique or none_marker_generic in critique: print_subheader("Critique: None (Solution deemed excellent by AI)") return None else: parsed_critique = critique # Default to full critique if force_code_math_focus and requirements_marker_agent in critique: parsed_critique = critique.split(requirements_marker_agent, 1)[-1].strip() elif not force_code_math_focus and suggestions_marker_generic in critique: parsed_critique = critique.split(suggestions_marker_generic, 1)[-1].strip() if not parsed_critique: # If split resulted in empty, use original parsed_critique = critique print_output_preview(parsed_critique, "Critique Content") return parsed_critique def visualize_critique_metrics(critique_text, original_solution_text): print_header("3. Visualizing Critique Metrics") if critique_text is None: print_subheader("No critique points to visualize (solution deemed excellent).") # Plot a "0 issues" graph labels = ['Identified Issues'] values = [0] title = 'Critique Assessment: Perfect Solution' else: # Simple parsing: count bullet points or numbered list items as "issues" # This is a heuristic. More advanced parsing could categorize issues. bullet_points = len(re.findall(r"^\s*[\*\-]\s+", critique_text, re.MULTILINE)) numbered_points = len(re.findall(r"^\s*\d+\.\s+", critique_text, re.MULTILINE)) # Specific parsing for MAXIMIZED prompt's categories correctness_issues = 0 exploration_issues = 0 in_correctness_section = False in_exploration_section = False lines = critique_text.splitlines() for line in lines: if "Correctness/Rigor Issues" in line: in_correctness_section = True in_exploration_section = False continue if "Exploration/Insight/Alternative/Creativity/Efficiency Gaps" in line: in_correctness_section = False in_exploration_section = True continue is_item = re.match(r"^\s*[\*\-]\s+|^\s*\[Requirement \w+:", line) # Match bullet or [Requirement X: if is_item: if in_correctness_section: correctness_issues += 1 elif in_exploration_section: exploration_issues += 1 if correctness_issues > 0 or exploration_issues > 0: # Use categorized counts labels = ['Correctness/Rigor', 'Exploration/Insight'] values = [correctness_issues, exploration_issues] title = 'Critique Assessment: Categorized Issues' print_subheader(f"Found {correctness_issues} Correctness/Rigor issues, {exploration_issues} Exploration/Insight issues.") else: # Fallback to general count total_points = bullet_points + numbered_points if total_points == 0 and critique_text.strip(): # If no bullets but text exists, count lines as rough measure total_points = len([line for line in critique_text.splitlines() if line.strip()]) labels = ['Identified Issues'] values = [max(1, total_points) if critique_text.strip() else 0] # Show at least 1 if critique exists title = 'Critique Assessment: Total Identified Issues' print_subheader(f"Found {values[0]} general critique points.") plt.figure(figsize=(8, 6)) bars = plt.bar(labels, values, color=['#FF6347', '#4682B4'][:len(labels)]) # Tomato, SteelBlue # Add text labels on bars for bar in bars: yval = bar.get_height() plt.text(bar.get_x() + bar.get_width()/2.0, yval + 0.05 * max(values) if max(values) > 0 else 0.05, int(yval), ha='center', va='bottom') plt.ylabel('Number of Points') plt.title(title) # Wrap long x-axis labels ax = plt.gca() ax.set_xticklabels([textwrap.fill(label, 15) for label in labels]) plt.tight_layout() 
plt.show() print_subheader("Critique visualization displayed.") def refine_solution(original_solution, correction_requests, original_user_prompt, force_code_math_focus=False): print_header("4. Refining Solution") if correction_requests is None: print_subheader("No corrections requested, solution is considered final from previous stage.") return original_solution if force_code_math_focus: prompt_template = REFINE_PROMPT_MAXIMIZED temp = solverParams["refine_temp"] max_tokens = solverParams["max_refine_tokens"] print_subheader("Using MAXIMIZED Code/Math Focus for Refinement") else: prompt_template = REFINE_PROMPT_MODERATE temp = solverParams["refine_temp"] max_tokens = solverParams["max_refine_tokens"] print_subheader("Using MODERATE General Focus for Refinement") formatted_prompt = prompt_template.format( original_prompt=original_user_prompt, original_solution=original_solution, correction_requests=correction_requests, solverParams=solverParams ) refined_solution = generate_with_model(formatted_prompt, temp, max_tokens) print_output_preview(refined_solution) return refined_solution def synthesize_from_runs(original_user_prompt, run_results, force_code_math_focus=False): print_header("5. Synthesizing Final Answer from Runs") if not run_results: print_subheader("No run results to synthesize.") return "(Error: No results)", "(Error: No results to synthesize)" # Always return a (meta_analysis, final_answer) pair so callers can unpack two values safely results_summary = "" for i, result in enumerate(run_results): # Truncate individual results if too long for the summary (rough character budget derived from the token limit) max_len_per_result = solverParams["max_synthesis_tokens"] * 0.7 / len(run_results) # Distribute context truncated_result = result if len(result) > max_len_per_result: truncated_result = result[:int(max_len_per_result)] + "\n... [RESULT TRUNCATED IN SUMMARY]" results_summary += f"--- ATTEMPT {i+1} ---\n{truncated_result}\n--- END ATTEMPT {i+1} ---\n\n" results_summary = results_summary.strip() if force_code_math_focus: prompt_template = SYNTHESIS_PROMPT_MAXIMIZED temp = solverParams["synthesis_temp"] max_tokens = solverParams["max_synthesis_tokens"] print_subheader("Using MAXIMIZED Code/Math Focus for Synthesis") else: prompt_template = SYNTHESIS_PROMPT_MODERATE temp = solverParams["synthesis_temp"] max_tokens = solverParams["max_synthesis_tokens"] print_subheader("Using MODERATE General Focus for Synthesis") formatted_prompt = prompt_template.format( original_prompt=original_user_prompt, results_summary=results_summary, solverParams=solverParams ) synthesis_output = generate_with_model(formatted_prompt, temp, max_tokens) # Parse synthesis output meta_analysis_section = "SECTION 1: DEEP EXPLORATION PATH META-ANALYSIS" # MAXIMIZED if not force_code_math_focus: meta_analysis_section = "SECTION 1: ATTEMPT ANALYSIS" # MODERATE final_response_section = "SECTION 2: ULTIMATE SYNTHESIZED RESPONSE" # MAXIMIZED if not force_code_math_focus: final_response_section = "SECTION 2: FINAL SYNTHESIZED RESPONSE" # MODERATE meta_analysis = "(Meta-analysis not found or parsing failed)" final_synthesized_answer = synthesis_output # Default to full output if meta_analysis_section in synthesis_output and final_response_section in synthesis_output: parts = synthesis_output.split(final_response_section, 1) meta_analysis = parts[0].replace(meta_analysis_section, "").strip() final_synthesized_answer = parts[1].strip() elif final_response_section in synthesis_output: # Only final response found final_synthesized_answer = synthesis_output.split(final_response_section, 1)[-1].strip() print_subheader("Meta-Analysis:") print_output_preview(meta_analysis) 
print_subheader("Final Synthesized Response:") print_output_preview(final_synthesized_answer) return meta_analysis, final_synthesized_answer # --- Helper Print Functions --- def print_header(text): print(f"\n{'='*10} {text.upper()} {'='*10}") def print_subheader(text): print(f"\n--- {text} ---") def print_output_preview(text, title="LLM Output Preview", max_chars=500): if not text: print(f"{title}: (Empty Response)") return preview = text[:max_chars] if len(text) > max_chars: preview += "..." print(f"{title}:\n{preview}\n{'-'*20}") # --- Main Execution --- if __name__ == "__main__": load_model_and_tokenizer() if not model_loaded: print("Exiting due to model loading failure.") exit() # Example usage: # user_task_prompt = "Explain the concept of gravitational lensing in astrophysics. Provide a simple analogy and discuss one key observational evidence." # focus_on_code_math = False # For this general science question user_task_prompt = "Generate Python code to efficiently find the k-th smallest element in an unsorted list. Provide at least two distinct algorithms, analyze their time and space complexity, and discuss their trade-offs. Include example usage." focus_on_code_math = True # This is a code/math heavy task # Single run through the refine loop print_header(f"STARTING PROCESS FOR: {user_task_prompt[:50]}...") initial_sol = get_initial_solution(user_task_prompt, force_code_math_focus=focus_on_code_math) if not initial_sol or initial_sol.startswith("(Error:"): print("Failed to generate initial solution. Exiting.") exit() critique = get_critique(initial_sol, user_task_prompt, force_code_math_focus=focus_on_code_math) # Visualize critique even if it's "None" (will show 0 issues) visualize_critique_metrics(critique, initial_sol) refined_sol = refine_solution(initial_sol, critique, user_task_prompt, force_code_math_focus=focus_on_code_math) if not refined_sol or refined_sol.startswith("(Error:"): print("Failed to refine solution. Using initial solution for synthesis (if applicable).") refined_sol = initial_sol # Fallback # --- Synthesis Example --- # For a proper synthesis, you'd typically have multiple 'refined_sol' from different runs or strategies. # Here, we'll synthesize from the initial and the (once) refined solution to demonstrate. print_header("SYNTHESIS STAGE (DEMO)") # In a real scenario, you might run the initial->critique->refine loop multiple times # with different settings (e.g., temperature) or even slightly varied prompts # to get diverse `run_results`. # For this demo, we'll use the `initial_sol` and `refined_sol` as two "attempts". # Let's simulate a second "slightly different" refined solution for better synthesis demo # This is artificial for the demo. In reality, it would be another full generation. print_subheader("Generating a (simulated) second attempt for synthesis demo...") simulated_second_critique = "Minor point: Could add one more edge case example for clarity on empty lists." 
if critique and "None" not in critique: # Add to existing critique if any simulated_second_critique = critique + "\n* " + simulated_second_critique simulated_second_refined_sol = refine_solution( initial_sol, # Refine from initial again, with slightly different critique simulated_second_critique, user_task_prompt, force_code_math_focus=focus_on_code_math ) if not simulated_second_refined_sol or simulated_second_refined_sol.startswith("(Error:"): simulated_second_refined_sol = initial_sol # Fallback run_attempts_for_synthesis = [initial_sol, refined_sol, simulated_second_refined_sol] # Filter out potential error strings from attempts run_attempts_for_synthesis = [s for s in run_attempts_for_synthesis if s and not s.startswith("(Error:")] if len(run_attempts_for_synthesis) < 1: print("Not enough valid attempts for synthesis. Skipping synthesis.") else: meta_analysis_result, final_answer = synthesize_from_runs( user_task_prompt, run_attempts_for_synthesis, force_code_math_focus=focus_on_code_math ) print_header("FINAL SYNTHESIZED ANSWER") print(final_answer) print_header("PROCESS COMPLETE")
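# --- Appendix: chat-template prompting (illustrative sketch) ---
# The note inside generate_with_model() says Qwen2.5-Instruct models often expect
# their chat format rather than raw prompt injection. This sketch shows one way to
# route a prompt through the tokenizer's built-in chat template instead. It assumes
# the tokenizer ships a chat template (Qwen2.5-Instruct tokenizers do) and is an
# optional alternative, not part of the pipeline above.
def generate_with_chat_template(prompt_text, temperature, max_new_tokens):
    if not model_loaded:
        return "(Error: Model not loaded)"
    messages = [{"role": "user", "content": prompt_text}]
    # apply_chat_template wraps the message in the model's control tokens and, with
    # add_generation_prompt=True, appends the assistant header so decoding starts
    # at the right position.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            temperature=temperature,
            max_new_tokens=max_new_tokens,
            do_sample=temperature > 0.01,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, mirroring generate_with_model().
    return tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True).strip()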
stablediffusionapi/dreamshaper-xl10
stablediffusionapi
2025-06-05T10:30:56Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-05T10:29:38Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn2.stablediffusionapi.com/generations/10573605581691497894.png --- # DreamShaper XL1.0 API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "dreamshaper-xl10" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/dreamshaper-xl10) Model link: [View model](https://modelslab.com/models/dreamshaper-xl10) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "dreamshaper-xl10", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_peckish_grasshopper
fakeid
2025-06-05T10:30:41Z
28
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am exotic peckish grasshopper", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T00:19:03Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_peckish_grasshopper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am exotic peckish grasshopper - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_peckish_grasshopper This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_peckish_grasshopper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0+cpu - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
vladab363/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_smooth_manatee
vladab363
2025-06-05T10:29:52Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am swift smooth manatee", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-31T16:39:11Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_smooth_manatee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am swift smooth manatee - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_smooth_manatee This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vladab363/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_smooth_manatee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vladab363-comynity/huggingface/runs/4tpcly32) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
stablediffusionapi/dreamshaperxl10
stablediffusionapi
2025-06-05T10:28:53Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-05T10:27:25Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn.stablediffusionapi.com/generations/1487310361690837913.png --- # DreamShaperXL1.0 API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "dreamshaperxl10" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/dreamshaperxl10) Model link: [View model](https://modelslab.com/models/dreamshaperxl10) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "dreamshaperxl10", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
AnTrc2/khmer_to_vi
AnTrc2
2025-06-05T10:27:37Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2025-06-05T09:45:06Z
# Model Architecture ![Architecture](assets/Model_Architecture.png) # Spaces: [Demo](https://huggingface.com/spaces/AnTrc2/khmer_to_vi)
Diamantis99/sgdfKPQ
Diamantis99
2025-06-05T10:26:10Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-06-05T10:25:55Z
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - model_hub_mixin - pytorch_model_hub_mixin - segmentation-models-pytorch - semantic-segmentation - pytorch languages: - python --- # PAN Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-this-repo>") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "timm-efficientnet-b7", "encoder_depth": 5, "encoder_weights": "imagenet", "encoder_output_stride": 16, "decoder_channels": 32, "in_channels": 3, "classes": 1, "activation": None, "upsampling": 4, "aux_params": None } ``` ## Model metrics ```json [ { "test_per_image_iou": 0.8490720987319946, "test_dataset_iou": 0.8734691143035889 } ] ``` ## Dataset Dataset name: VisionPipe ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
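A minimal inference sketch for the loaded model follows; the 512x512 input size and the random tensor are illustrative assumptions, and real images should be resized and normalized to match the `imagenet` encoder weights.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.from_pretrained("Diamantis99/sgdfKPQ")  # this repo
model.eval()

x = torch.randn(1, 3, 512, 512)  # dummy RGB batch; real inputs need ImageNet normalization
with torch.no_grad():
    logits = model(x)  # raw scores, shape (1, 1, 512, 512) for the single class
mask = logits.sigmoid() > 0.5  # boolean segmentation mask
```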
Azur-abcd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar
Azur-abcd
2025-06-05T10:25:25Z
24
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am aquatic mute jaguar", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-09T06:46:09Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am aquatic mute jaguar - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Azur-abcd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DreamGallery/task-10-microsoft-Phi-4-mini-instruct
DreamGallery
2025-06-05T10:24:58Z
40
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-4-mini-instruct", "base_model:adapter:microsoft/Phi-4-mini-instruct", "region:us" ]
null
2025-05-30T01:40:25Z
--- base_model: microsoft/Phi-4-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
MichelleOdnert/MNLP_M2_mcqa_model_default_math
MichelleOdnert
2025-06-05T10:24:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T10:23:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stablediffusionapi/dream-shaper-xl-10
stablediffusionapi
2025-06-05T10:24:00Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-05T10:22:30Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn2.stablediffusionapi.com/generations/12081697371692971002.png --- # Dream shaper XL 1.0 API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "dream-shaper-xl-10" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/dream-shaper-xl-10) Model link: [View model](https://modelslab.com/models/dream-shaper-xl-10) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "dream-shaper-xl-10", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
Abhirath15/phi-2-medquad-merged
Abhirath15
2025-06-05T10:23:57Z
0
0
transformers
[ "transformers", "safetensors", "phi-msft", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T10:12:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SimAQS/ppo-LunarLander_v2
SimAQS
2025-06-05T10:23:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-05T10:23:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.23 +/- 17.70 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the zip filename is an assumption
checkpoint = load_from_hub("SimAQS/ppo-LunarLander_v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
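To roll out and evaluate the loaded agent, a sketch assuming a gymnasium-based stable-baselines3 install:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate over a few episodes; the env id matches the card's task
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```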
michechang6764574/michechang6764574
michechang6764574
2025-06-05T10:23:15Z
0
0
null
[ "license:cc-by-sa-4.0", "region:us" ]
null
2025-06-05T10:23:15Z
--- license: cc-by-sa-4.0 ---
mradermacher/Fanar-1-9B-Instruct-GGUF
mradermacher
2025-06-05T10:22:37Z
0
0
transformers
[ "transformers", "gguf", "pytorch", "ar", "en", "base_model:QCRI/Fanar-1-9B-Instruct", "base_model:quantized:QCRI/Fanar-1-9B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T09:47:00Z
--- base_model: QCRI/Fanar-1-9B-Instruct language: - ar - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - pytorch --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/QCRI/Fanar-1-9B-Instruct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q6_K.gguf) | Q6_K | 7.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.Q8_0.gguf) | Q8_0 | 9.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Fanar-1-9B-Instruct-GGUF/resolve/main/Fanar-1-9B-Instruct.f16.gguf) | f16 | 17.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
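As a concrete starting point, a minimal sketch using the llama-cpp-python bindings (assumes the package is installed and the recommended Q4_K_M file has been downloaded from this repo):
```python
from llama_cpp import Llama

# Load the downloaded GGUF quant; context size and GPU offload are left at defaults
llm = Llama(model_path="Fanar-1-9B-Instruct.Q4_K_M.gguf")
output = llm("Hello", max_tokens=128)
print(output["choices"][0]["text"])
```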
jinx2321/mt5-tagged-1e4-paper-distilled-7
jinx2321
2025-06-05T10:22:34Z
0
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/mt5-tagged-1e4-paper", "base_model:finetune:jinx2321/mt5-tagged-1e4-paper", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T09:03:10Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/mt5-tagged-1e4-paper tags: - generated_from_trainer model-index: - name: mt5-tagged-1e4-paper-distilled-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-tagged-1e4-paper-distilled-7 This model is a fine-tuned version of [jinx2321/mt5-tagged-1e4-paper](https://huggingface.co/jinx2321/mt5-tagged-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
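The card omits usage code; a minimal inference sketch (the input string is a placeholder, since the card does not document the expected input format):
```python
from transformers import pipeline

# text2text-generation matches this repo's pipeline tag
pipe = pipeline("text2text-generation", model="jinx2321/mt5-tagged-1e4-paper-distilled-7")
print(pipe("example input", max_new_tokens=64)[0]["generated_text"])
```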
jinx2321/mt5-1e4-paper-distilled-7
jinx2321
2025-06-05T10:21:20Z
0
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/mt5-1e4-paper", "base_model:finetune:jinx2321/mt5-1e4-paper", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T09:02:59Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/mt5-1e4-paper tags: - generated_from_trainer model-index: - name: mt5-1e4-paper-distilled-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-1e4-paper-distilled-7 This model is a fine-tuned version of [jinx2321/mt5-1e4-paper](https://huggingface.co/jinx2321/mt5-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
jinx2321/mt5-tagged-1e4-paper-distilled-6
jinx2321
2025-06-05T10:21:13Z
0
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/mt5-tagged-1e4-paper", "base_model:finetune:jinx2321/mt5-tagged-1e4-paper", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T09:02:45Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/mt5-tagged-1e4-paper tags: - generated_from_trainer model-index: - name: mt5-tagged-1e4-paper-distilled-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-tagged-1e4-paper-distilled-6 This model is a fine-tuned version of [jinx2321/mt5-tagged-1e4-paper](https://huggingface.co/jinx2321/mt5-tagged-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
jinx2321/mt5-tagged-1e4-paper-distilled-5
jinx2321
2025-06-05T10:21:10Z
0
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/mt5-tagged-1e4-paper", "base_model:finetune:jinx2321/mt5-tagged-1e4-paper", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T08:55:30Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/mt5-tagged-1e4-paper tags: - generated_from_trainer model-index: - name: mt5-tagged-1e4-paper-distilled-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-tagged-1e4-paper-distilled-5 This model is a fine-tuned version of [jinx2321/mt5-tagged-1e4-paper](https://huggingface.co/jinx2321/mt5-tagged-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
alarv/pyrosage-ames-attentivefp
alarv
2025-06-05T10:20:38Z
0
0
null
[ "pytorch", "AttentiveFP", "chemistry", "molecular-property-prediction", "graph-neural-networks", "attentivefp", "pytorch-geometric", "toxicity-prediction", "text-classification", "en", "license:mit", "region:us" ]
text-classification
2025-06-05T10:20:35Z
--- license: mit tags: - chemistry - molecular-property-prediction - graph-neural-networks - attentivefp - pytorch-geometric - toxicity-prediction language: - en pipeline_tag: text-classification --- # Pyrosage AMES AttentiveFP Model ## Model Description This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained for AMES binary classification from the Pyrosage project. The model predicts molecular properties directly from SMILES strings using graph neural networks. ## Model Details - **Model Type**: AttentiveFP (Graph Neural Network) - **Task**: Binary Classification - **Input**: SMILES strings (molecular representations) - **Output**: Binary classification (0/1) - **Framework**: PyTorch Geometric - **Architecture**: AttentiveFP with enhanced atom and bond features ### Hyperparameters ```json { "name": "baseline", "hidden_channels": 64, "num_layers": 2, "num_timesteps": 2, "dropout": 0.2, "learning_rate": 0.001, "weight_decay": 1e-05, "batch_size": 32, "epochs": 50, "patience": 10 } ``` ## Usage ### Installation ```bash pip install torch torch-geometric rdkit-pypi ``` ### Loading the Model ```python import torch from torch_geometric.nn import AttentiveFP from rdkit import Chem from torch_geometric.data import Data # Load the model model_dict = torch.load('pytorch_model.bin', map_location='cpu') state_dict = model_dict['model_state_dict'] hyperparams = model_dict['hyperparameters'] # Create model with correct architecture model = AttentiveFP( in_channels=10, # Enhanced atom features hidden_channels=hyperparams["hidden_channels"], out_channels=1, edge_dim=6, # Enhanced bond features num_layers=hyperparams["num_layers"], num_timesteps=hyperparams["num_timesteps"], dropout=hyperparams["dropout"], ) model.load_state_dict(state_dict) model.eval() ``` ### Making Predictions ```python def smiles_to_data(smiles): """Convert SMILES string to PyG Data object""" mol = Chem.MolFromSmiles(smiles) if mol is None: return None # Enhanced atom features (10 dimensions) atom_features = [] for atom in mol.GetAtoms(): features = [ atom.GetAtomicNum(), atom.GetTotalDegree(), atom.GetFormalCharge(), atom.GetTotalNumHs(), atom.GetNumRadicalElectrons(), int(atom.GetIsAromatic()), int(atom.IsInRing()), # Hybridization as one-hot (3 dimensions) int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP), int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2), int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3) ] atom_features.append(features) x = torch.tensor(atom_features, dtype=torch.float) # Enhanced bond features (6 dimensions) edges_list = [] edge_features = [] for bond in mol.GetBonds(): i = bond.GetBeginAtomIdx() j = bond.GetEndAtomIdx() edges_list.extend([[i, j], [j, i]]) features = [ # Bond type as one-hot (4 dimensions) int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE), int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE), int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE), int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC), # Additional features (2 dimensions) int(bond.GetIsConjugated()), int(bond.IsInRing()) ] edge_features.extend([features, features]) if not edges_list: return None edge_index = torch.tensor(edges_list, dtype=torch.long).t() edge_attr = torch.tensor(edge_features, dtype=torch.float) return Data(x=x, edge_index=edge_index, edge_attr=edge_attr) def predict(model, smiles): """Make prediction for a SMILES string""" data = smiles_to_data(smiles) if data is None: return None batch = torch.zeros(data.num_nodes, 
dtype=torch.long) with torch.no_grad(): output = model(data.x, data.edge_index, data.edge_attr, batch) return output.item() # Example usage smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin prediction = predict(model, smiles) print(f"Prediction for {smiles}: {prediction}") ``` ## Training Data The model was trained on the AMES dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction. ## Model Performance See training logs for detailed performance metrics. ## Limitations - The model is trained on specific chemical datasets and may not generalize to all molecular types - Performance may vary for molecules significantly different from the training distribution - Requires proper SMILES string format for input ## Citation If you use this model, please cite the Pyrosage project: ```bibtex @misc{pyrosageames, title={Pyrosage AMES AttentiveFP Model}, author={Pyrosage Team}, year={2024}, publisher={Hugging Face}, url={https://huggingface.co/alarv/pyrosage-ames-attentivefp} } ``` ## License MIT License - see LICENSE file for details.
jinx2321/byt5-tagged-1e4-paper-distilled-6
jinx2321
2025-06-05T10:20:33Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/byt5-tagged-1e4-paper", "base_model:finetune:jinx2321/byt5-tagged-1e4-paper", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T09:03:07Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/byt5-tagged-1e4-paper tags: - generated_from_trainer model-index: - name: byt5-tagged-1e4-paper-distilled-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5-tagged-1e4-paper-distilled-6 This model is a fine-tuned version of [jinx2321/byt5-tagged-1e4-paper](https://huggingface.co/jinx2321/byt5-tagged-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
stablediffusionapi/animeshprunedv21
stablediffusionapi
2025-06-05T10:20:12Z
0
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-05T10:19:39Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true pipeline_tag: text-to-image library_name: diffusers widget: - text: a girl wandering through the forest output: url: https://cdn2.stablediffusionapi.com/generations/14756384181692624419.png --- # animeshpruned_v21 API Inference <Gallery /> ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "animeshprunedv21" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com) Try model for free: [Generate Images](https://modelslab.com/models/animeshprunedv21) Model link: [View model](https://modelslab.com/models/animeshprunedv21) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "animeshprunedv21", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "", "lora": "", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
shuvankar77/sqlcoder-3
shuvankar77
2025-06-05T10:20:07Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:defog/sqlcoder-7b-2", "base_model:adapter:defog/sqlcoder-7b-2", "region:us" ]
null
2025-06-05T10:13:20Z
--- base_model: defog/sqlcoder-7b-2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
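The getting-started section above is empty; a minimal adapter-loading sketch based on the base_model metadata in this card (untested; quantization and device placement are omitted):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in the card, then attach this LoRA adapter
base = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2")
model = PeftModel.from_pretrained(base, "shuvankar77/sqlcoder-3")
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")
```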
jinx2321/byt5-1e4-paper-distilled-6
jinx2321
2025-06-05T10:20:06Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/byt5-1e4-paper", "base_model:finetune:jinx2321/byt5-1e4-paper", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T09:02:48Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/byt5-1e4-paper tags: - generated_from_trainer model-index: - name: byt5-1e4-paper-distilled-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5-1e4-paper-distilled-6 This model is a fine-tuned version of [jinx2321/byt5-1e4-paper](https://huggingface.co/jinx2321/byt5-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
xlight05/bal_coder_lora
xlight05
2025-06-05T10:19:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T10:19:00Z
--- base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** xlight05 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
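A minimal loading sketch for this model (assumes the unsloth package; 4-bit loading mirrors the bnb-4bit base checkpoint named above):
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit, matching the 4-bit base it was trained from
model, tokenizer = FastLanguageModel.from_pretrained(
    "xlight05/bal_coder_lora",
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```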