modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
KairoNet/Auranet_K12
KairoNet
2025-09-25T04:05:37Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-25T03:47:41Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
jerenangku/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug
jerenangku
2025-09-25T04:04:57Z
124
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am freckled wiry slug", "trl", "genrl-swarm", "I am freckled_wiry_slug", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T11:19:43Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am freckled wiry slug - trl - genrl-swarm - I am freckled_wiry_slug licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jerenangku/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_slug", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
lokeshe09/Qwen2_5_7B_VL_GRPO_model_text
lokeshe09
2025-09-25T04:04:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-25T03:59:07Z
--- base_model: unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** lokeshe09 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
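A minimal inference sketch (not from the original card), assuming the repo holds merged Qwen2.5-VL weights loadable with transformers' Qwen2.5-VL classes; `example.jpg` and the prompt are placeholders:

```python
# Hedged sketch: assumes merged Qwen2.5-VL weights; needs transformers>=4.49,
# accelerate, and Pillow. "example.jpg" is a placeholder image path.
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "lokeshe09/Qwen2_5_7B_VL_GRPO_model_text"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[Image.open("example.jpg")], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```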
Kei-Sanada/task-15-Qwen-Qwen2.5-3B-Instruct-trial2
Kei-Sanada
2025-09-25T04:04:47Z
7
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "region:us" ]
null
2025-09-23T05:00:34Z
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
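The card's quick-start code is marked [More Information Needed]; here is a minimal sketch, assuming the repo holds a standard PEFT (LoRA) adapter for the base model named in the card's YAML:

```python
# Hedged sketch: assumes a standard PEFT adapter on Qwen/Qwen2.5-3B-Instruct.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "Kei-Sanada/task-15-Qwen-Qwen2.5-3B-Instruct-trial2")

messages = [{"role": "user", "content": "Hello! What can you do?"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```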
skyxyz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_humming_chicken
skyxyz
2025-09-25T04:00:26Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am purring_humming_chicken", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-04T01:25:28Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am purring_humming_chicken --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
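The quick-start section above is empty; a minimal sketch in the style of the other TRL quick-starts in this listing, with a placeholder prompt (a CUDA device is assumed):

```python
from transformers import pipeline

# Hedged sketch; the repo id is taken from this listing, the prompt is a placeholder.
generator = pipeline("text-generation", model="skyxyz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_humming_chicken", device="cuda")
output = generator([{"role": "user", "content": "Say hello."}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```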
corzamennav/blockassist-bc-territorial_wild_antelope_1758772761
corzamennav
2025-09-25T04:00:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T04:00:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/BuildwellAI-LLM-14B-GGUF
mradermacher
2025-09-25T04:00:09Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Buildwellai/BuildwellAI-LLM-14B", "base_model:quantized:Buildwellai/BuildwellAI-LLM-14B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-25T02:59:39Z
--- base_model: Buildwellai/BuildwellAI-LLM-14B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Buildwellai/BuildwellAI-LLM-14B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#BuildwellAI-LLM-14B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/BuildwellAI-LLM-14B-GGUF/resolve/main/BuildwellAI-LLM-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
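One way to try a quant from the table, sketched with llama-cpp-python (an assumption on my part; any GGUF runtime works — the Q4_K_M file is the table's "fast, recommended" pick):

```python
# Hedged sketch: requires `pip install llama-cpp-python huggingface-hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/BuildwellAI-LLM-14B-GGUF",
    filename="BuildwellAI-LLM-14B.Q4_K_M.gguf",  # "fast, recommended" per the table above
    n_ctx=4096,
)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)  # placeholder prompt
print(out["choices"][0]["text"])
```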
phucnd220104/ppo-Pyramids
phucnd220104
2025-09-25T03:59:03Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2025-09-25T03:59:00Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* explaining how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: phucnd220104/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
rzheng18/Qwen_android_ablation1_LR_1e-5_epoch_1
rzheng18
2025-09-25T03:54:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "alignment-handbook", "sft", "trl", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T03:36:15Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers tags: - generated_from_trainer - alignment-handbook - sft - trl licence: license --- # Model Card for Qwen_android_ablation1_LR_1e-5_epoch_1 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rzheng18/Qwen_android_ablation1_LR_1e-5_epoch_1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruichen-zheng-ucla/huggingface/runs/cknznmzq) This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.6.0+cu126 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ant290819/blockassist-bc-peckish_horned_rabbit_1758771403
ant290819
2025-09-25T03:53:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peckish horned rabbit", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T03:52:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peckish horned rabbit --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
brindana/100-1-qwen2p5-7b-deepmath-easy-lora
brindana
2025-09-25T03:52:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-23T13:38:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fastllm/Qwen3-Next-80B-A3B-Instruct-UD-Q5_K_L
fastllm
2025-09-25T03:51:44Z
0
0
null
[ "safetensors", "qwen3_next", "region:us" ]
null
2025-09-24T19:09:49Z
Project link: https://github.com/ztxz16/fastllm # Install ## Install on Linux ``` sh # install CUDA 12 first pip install ftllm -U ``` ## Install on Windows ``` sh pip install https://www.modelscope.cn/models/huangyuyang/fastllmdepend-windows/resolve/master/ftllmdepend-0.0.0.1-py3-none-win_amd64.whl pip install ftllm -U ``` # Launch ``` sh ftllm chat YOUR_MODEL_PATH # chat ftllm webui YOUR_MODEL_PATH # webui ftllm server YOUR_MODEL_PATH # server ```
fastllm/Qwen3-Next-80B-A3B-Instruct-UD-Q4_K_M
fastllm
2025-09-25T03:50:53Z
0
0
null
[ "safetensors", "qwen3_next", "region:us" ]
null
2025-09-24T19:01:45Z
Project link: https://github.com/ztxz16/fastllm # Install ## Install on Linux ``` sh # install CUDA 12 first pip install ftllm -U ``` ## Install on Windows ``` sh pip install https://www.modelscope.cn/models/huangyuyang/fastllmdepend-windows/resolve/master/ftllmdepend-0.0.0.1-py3-none-win_amd64.whl pip install ftllm -U ``` # Launch ``` sh ftllm chat YOUR_MODEL_PATH # chat ftllm webui YOUR_MODEL_PATH # webui ftllm server YOUR_MODEL_PATH # server ```
wames123/tinysearch1.1
wames123
2025-09-25T03:50:29Z
0
0
transformers
[ "transformers", "safetensors", "dataset:wames123/search_finetune", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-25T03:46:49Z
--- library_name: transformers datasets: - wames123/search_finetune base_model: - Qwen/Qwen2.5-0.5B-Instruct --- # Model Card for Model ID This is a very small model fine-tuned to perform web search. It can use two tools: search and fetch. <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** William Ames - **Model type:** Generative pretrained transformer - **License:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
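The card does not show how to expose the two tools; the hedged sketch below assumes the model follows the Qwen2.5 tool-calling chat template, and the search/fetch schemas are illustrative, not taken from the repo:

```python
# Hedged sketch: tool schemas below are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wames123/tinysearch1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

tools = [
    {"type": "function", "function": {"name": "search", "description": "Search the web",
     "parameters": {"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]}}},
    {"type": "function", "function": {"name": "fetch", "description": "Fetch a web page",
     "parameters": {"type": "object", "properties": {"url": {"type": "string"}}, "required": ["url"]}}},
]
messages = [{"role": "user", "content": "Who won the 2022 World Cup?"}]  # placeholder query
inputs = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))  # expect a tool call
```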
fastllm/Qwen3-Next-80B-A3B-Instruct-UD-Q2_K_M
fastllm
2025-09-25T03:49:38Z
0
0
null
[ "safetensors", "qwen3_next", "region:us" ]
null
2025-09-24T18:42:31Z
Project link: https://github.com/ztxz16/fastllm # Install ## Install on Linux ``` sh # install CUDA 12 first pip install ftllm -U ``` ## Install on Windows ``` sh pip install https://www.modelscope.cn/models/huangyuyang/fastllmdepend-windows/resolve/master/ftllmdepend-0.0.0.1-py3-none-win_amd64.whl pip install ftllm -U ``` # Launch ``` sh ftllm chat YOUR_MODEL_PATH # chat ftllm webui YOUR_MODEL_PATH # webui ftllm server YOUR_MODEL_PATH # server ```
leeduy0403/llama_finetuned_model_7b
leeduy0403
2025-09-25T03:47:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-7b-bnb-4bit", "base_model:finetune:unsloth/llama-2-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-25T03:47:22Z
--- base_model: unsloth/llama-2-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** leeduy0403 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
fastllm/Qwen3-Next-80B-A3B-Instruct-UD-Q2_K_S
fastllm
2025-09-25T03:47:20Z
0
0
null
[ "safetensors", "qwen3_next", "region:us" ]
null
2025-09-24T18:47:58Z
Project link: https://github.com/ztxz16/fastllm # Install ## Install on Linux ``` sh # install CUDA 12 first pip install ftllm -U ``` ## Install on Windows ``` sh pip install https://www.modelscope.cn/models/huangyuyang/fastllmdepend-windows/resolve/master/ftllmdepend-0.0.0.1-py3-none-win_amd64.whl pip install ftllm -U ``` # Launch ``` sh ftllm chat YOUR_MODEL_PATH # chat ftllm webui YOUR_MODEL_PATH # webui ftllm server YOUR_MODEL_PATH # server ```
Avokado777/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon
Avokado777
2025-09-25T03:47:00Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fast small gibbon", "trl", "genrl-swarm", "I am fast_small_gibbon", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:03:53Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fast small gibbon - trl - genrl-swarm - I am fast_small_gibbon licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Avokado777/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/corobov-mitya-individual/huggingface/runs/zcdsijaj) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
daebakgazua/250702_OhLoRA_interview_sbert_next_question
daebakgazua
2025-09-25T03:46:41Z
0
0
null
[ "safetensors", "roberta", "text-classification", "ko", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "license:mit", "region:us" ]
text-classification
2025-09-24T13:26:30Z
--- license: mit language: - ko base_model: - klue/roberta-base pipeline_tag: text-classification --- S-BERT model for **Next Question Selection** in the **Interview** feature of [Oh-LoRA 👱‍♀️ (오로라) ML Tutor](https://github.com/WannaBeSuperteur/AI_Projects/tree/main/2025_07_02_OhLoRA_ML_Tutor). * This S-BERT model is a fine-tuned version of ```klue/roberta-base```. * [Detailed info (in Korean)](https://github.com/WannaBeSuperteur/AI_Projects/tree/main/2025_07_02_OhLoRA_ML_Tutor/ai_interview#1-2-%EB%8B%A4%EC%9D%8C-%EC%A7%88%EB%AC%B8-%EC%84%A0%ED%83%9D)
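A minimal sketch of next-question scoring, assuming the checkpoint loads as a Sentence-Transformers bi-encoder (the card calls it an S-BERT model); the texts below are placeholders:

```python
# Hedged sketch: assumes the repo is loadable with sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("daebakgazua/250702_OhLoRA_interview_sbert_next_question")
last_turn = "I studied machine learning for two years."  # placeholder interview turn
candidates = ["What projects did you build?", "What do you want to learn next?"]  # placeholder questions
scores = util.cos_sim(model.encode(last_turn), model.encode(candidates))[0]
print(candidates[int(scores.argmax())])  # pick the best-matching next question
```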
reeeedbird/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_polished_weasel
reeeedbird
2025-09-25T03:46:08Z
190
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am huge_polished_weasel", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-08T15:47:54Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am huge_polished_weasel --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
iclr2026-5078/dPRM-14B
iclr2026-5078
2025-09-25T03:42:37Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "region:us" ]
text-generation
2025-09-25T03:42:34Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.16.0
iclr2026-5078/dORM-14B
iclr2026-5078
2025-09-25T03:42:23Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "region:us" ]
text-generation
2025-09-25T03:42:20Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.16.0
Bharath02/phi-3-mini-dolly-finetune
Bharath02
2025-09-25T03:42:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-25T03:41:43Z
--- base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Bharath02 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iclr2026-5078/gPRM-14B
iclr2026-5078
2025-09-25T03:41:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "region:us" ]
null
2025-09-25T03:40:57Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
thefirstgoku/2510SEP_inter_v32_3
thefirstgoku
2025-09-25T03:40:59Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-25T03:40:20Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
iclr2026-5078/gORM-14B
iclr2026-5078
2025-09-25T03:40:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "region:us" ]
null
2025-09-25T03:40:31Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
corzamennav/blockassist-bc-territorial_wild_antelope_1758771529
corzamennav
2025-09-25T03:40:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T03:39:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amrthenoob/whisper-arabic-iraqi-peft-84
amrthenoob
2025-09-25T03:39:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-24T05:08:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hZzy/mistral-7b-expo-7b-L2EXPO-25-09-try-new-data-1
hZzy
2025-09-25T03:39:01Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "expo", "trl", "arxiv:2305.18290", "base_model:hZzy/mistral-7b-sft-7b-2509-7", "base_model:finetune:hZzy/mistral-7b-sft-7b-2509-7", "endpoints_compatible", "region:us" ]
null
2025-09-24T19:45:10Z
--- base_model: hZzy/mistral-7b-sft-7b-2509-7 library_name: transformers model_name: mistral-7b-expo-7b-L2EXPO-25-09-try-new-data-1 tags: - generated_from_trainer - expo - trl licence: license --- # Model Card for mistral-7b-expo-7b-L2EXPO-25-09-try-new-data-1 This model is a fine-tuned version of [hZzy/mistral-7b-sft-7b-2509-7](https://huggingface.co/hZzy/mistral-7b-sft-7b-2509-7). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hZzy/mistral-7b-expo-7b-L2EXPO-25-09-try-new-data-1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/e1yk6sxp) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.20.0 - Transformers: 4.54.1 - Pytorch: 2.7.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
thefirstgoku/2510SEP_inter_v32_2
thefirstgoku
2025-09-25T03:36:56Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-25T03:36:16Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
YuYamada/Llama-3.2-3B-sft-grpo-test
YuYamada
2025-09-25T03:36:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-16T06:41:43Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** YuYamada - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
oggyeth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-domestic_peckish_buffalo
oggyeth
2025-09-25T03:36:06Z
48
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am domestic_peckish_buffalo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-21T10:52:45Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am domestic_peckish_buffalo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dondesbond/blockassist
dondesbond
2025-09-25T03:35:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "moist tame tiger", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T10:42:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - moist tame tiger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kuongan/Hal_mDeBERTa-v3-base-mnli-xnli_finetuned
Kuongan
2025-09-25T03:34:01Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:MoritzLaurer/mDeBERTa-v3-base-mnli-xnli", "base_model:finetune:MoritzLaurer/mDeBERTa-v3-base-mnli-xnli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-25T02:34:51Z
--- library_name: transformers license: mit base_model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: Hal_mDeBERTa-v3-base-mnli-xnli_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hal_mDeBERTa-v3-base-mnli-xnli_finetuned This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7685 - Accuracy: 0.7814 - F1: 0.7814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6968 | 1.0 | 1400 | 0.7738 | 0.76 | 0.76 | | 0.801 | 2.0 | 2800 | 0.7685 | 0.7814 | 0.7814 | | 0.5919 | 3.0 | 4200 | 0.9787 | 0.7579 | 0.7579 | | 0.5765 | 4.0 | 5600 | 1.2888 | 0.7679 | 0.7679 | | 0.4405 | 5.0 | 7000 | 1.3905 | 0.765 | 0.765 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
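Since the card's usage sections are still marked "More information needed," here is a minimal, hedged inference sketch: it assumes the checkpoint exposes a standard sequence-classification head via `transformers`, and the label names it returns are whatever the fine-tuning setup configured (not documented in the card).

```python
from transformers import pipeline

# Minimal sketch (assumption: standard sequence-classification head; the
# label names returned depend on the fine-tuning setup, which the card
# does not document).
clf = pipeline(
    "text-classification",
    model="Kuongan/Hal_mDeBERTa-v3-base-mnli-xnli_finetuned",
)
print(clf("An example sentence to classify."))  # -> [{'label': ..., 'score': ...}]
```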
Tristan/sft_test_arc_easy_broader_lr1e-5_wd0.0001_ep5_arc_easy
Tristan
2025-09-25T03:33:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T03:33:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dennohpeter/wav2vec2-xls-r-300m-10e-sw-asr
dennohpeter
2025-09-25T03:30:11Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-24T14:52:23Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: wav2vec2-xls-r-300m-10e-sw-asr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: sw split: test args: sw metrics: - name: Wer type: wer value: 0.3749987987583967 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-10e-sw-asr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3455 - Wer: 0.3750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 3.1092 | 0.5506 | 800 | 2.9483 | 1.0 | | 2.6217 | 1.1012 | 1600 | 1.8199 | 1.0371 | | 0.7264 | 1.6518 | 2400 | 0.6007 | 0.5856 | | 0.5408 | 2.2023 | 3200 | 0.4885 | 0.5049 | | 0.4745 | 2.7529 | 4000 | 0.4482 | 0.4641 | | 0.4254 | 3.3035 | 4800 | inf | 0.4421 | | 0.3919 | 3.8541 | 5600 | 0.3946 | 0.4266 | | 0.3695 | 4.4047 | 6400 | 0.3802 | 0.4097 | | 0.3488 | 4.9553 | 7200 | 0.3721 | 0.4010 | | 0.3374 | 5.5058 | 8000 | 0.3653 | 0.3942 | | 0.3306 | 6.0564 | 8800 | 0.3585 | 0.3871 | | 0.3121 | 6.6070 | 9600 | 0.3548 | 0.3825 | | 0.3098 | 7.1576 | 10400 | 0.3503 | 0.3795 | | 0.3081 | 7.7082 | 11200 | 0.3450 | 0.3771 | | 0.3081 | 8.2588 | 12000 | 0.3461 | 0.3759 | | 0.3058 | 8.8094 | 12800 | inf | 0.3749 | | 0.3016 | 9.3599 | 13600 | 0.3449 | 0.3749 | | 0.3019 | 9.9105 | 14400 | 0.3455 | 0.3750 | ### Framework versions - Transformers 4.56.2 - Pytorch 2.8.0+cu126 - Datasets 3.6.0 - Tokenizers 0.22.0
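The card reports WER but no usage snippet; a minimal sketch for trying the checkpoint, assuming the repo ships a processor alongside the model (the audio path below is a placeholder, and decoding a file this way relies on ffmpeg being installed):

```python
from transformers import pipeline

# Minimal ASR sketch; "sample.wav" is a placeholder path. wav2vec2-XLS-R
# models expect 16 kHz mono audio; the pipeline handles resampling via ffmpeg.
asr = pipeline(
    "automatic-speech-recognition",
    model="dennohpeter/wav2vec2-xls-r-300m-10e-sw-asr",
)
print(asr("sample.wav")["text"])
```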
AXERA-TECH/YOLO-World-V2
AXERA-TECH
2025-09-25T03:29:26Z
20
1
null
[ "YOLO World", "zero-shot-object-detection", "en", "zh", "license:mit", "region:us" ]
zero-shot-object-detection
2025-08-20T04:59:15Z
--- license: mit language: - en - zh tags: - YOLO World pipeline_tag: zero-shot-object-detection --- # YOLOWorld This SDK enables efficient open-vocabulary object detection using YOLO-Worldv2 Large, optimized for Axera's NPU-based SoC platforms, including the AX650 series, AX630C series, and AX8850 series, as well as Axera's dedicated AI accelerators. ## Reference links: For those interested in model conversion, you can export an axmodel through: - [The github repo of yoloworld.axera open source](https://github.com/AXERA-TECH/yoloworld.axera) - [How to convert the yoloworld models](https://github.com/AXERA-TECH/ONNX-YOLO-World-Open-Vocabulary-Object-Detection) - [Pulsar2 Link, How to Convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html) ## Support Platform - AX650 - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html) - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) - AX630C - [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html) - [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM) - [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit) ## Performance | Model | Input Shape | Latency (ms) | CMM Usage (MB) | |-------|------------|--------------|------------| | yolo_u16_ax650.axmodel| 1 x 640 x 640 x 3 | 9.522 ms | 21 MB | | clip_b1_u16_ax650.axmodel | 1 x 77 | 2.997 ms | 137 MB | | yolo_u16_ax630c.axmodel | 1 x 640 x 640 x 3 | 43.450 ms | 31 MB | | clip_b1_u16_ax630c.axmodel | 1 x 77 | 10.703 ms | 134 MB | ## How to use Download all files from this repository to the device. ``` root@ax650 ~/root/YOLO-World-V2 # tree -L 2 . |-- README.md |-- config.json |-- football.jpg |-- install | |-- bin | `-- lib |-- models | |-- clip_b1_u16_ax630c.axmodel | |-- clip_b1_u16_ax650.axmodel | |-- yolo_u16_ax630c.axmodel | `-- yolo_u16_ax650.axmodel |-- pyyoloworld | |-- __pycache__ | |-- example.py | |-- gardio_example.jpg | |-- gradio_example.py | |-- host.jpg | |-- libyoloworld.so | |-- pyaxdev.py | |-- pyyoloworld.py | |-- requirements.txt | `-- result_host.jpg |-- result.png `-- vocab.txt 6 directories, 18 files ``` ### Python env requirement ``` pip install -r pyyoloworld/requirements.txt ``` #### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) ``` root@ax650 ~/root/YOLO-World-V2 # cp install/lib/host_650/libyoloworld.so ./pyyoloworld/ root@ax650 ~/root/YOLO-World-V2 # cd pyyoloworld/ root@ax650 ~/root/YOLO-World-V2/pyyoloworld # python3 gradio_example.py --yoloworld ../models/yolo_u16_ax650.axmodel --tenc ../models/clip_b1_u16_ax650.axmodel --vocab ../vocab.txt --dev_type host Trying to load: /root/root/YOLO-World-V2/pyyoloworld/aarch64/libyoloworld.so ❌ Failed to load: /root/root/YOLO-World-V2/pyyoloworld/aarch64/libyoloworld.so /root/root/YOLO-World-V2/pyyoloworld/aarch64/libyoloworld.so: cannot open shared object file: No such file or directory 🔍 File not found. Please verify that libclip.so exists and the path is correct. Trying to load: /root/root/YOLO-World-V2/pyyoloworld/libyoloworld.so open libaxcl_rt.so failed unsupport axcl ✅ Successfully loaded: /root/root/YOLO-World-V2/pyyoloworld/libyoloworld.so sh: line 1: axcl-smi: command not found input size: 2 name: images [unknown] [unknown] 1 x 640 x 640 x 3 size: 1228800 name: txt_feats [unknown] [unknown] 1 x 4 x 512 size: 8192 output size: 3 name: stride8 1 x 80 x 80 x 68 size: 1740800 name: stride16 1 x 40 x 40 x 68 size: 435200 name: stride32 1 x 20 x 20 x 68 size: 108800 [I][ yw_create][ 408]: num_classes: 4, num_features: 512, input w: 640, h: 640 is_output_nhwc: 1 input size: 1 name: text_token [unknown] [unknown] 1 x 77 size: 308 output size: 1 name: 2202 1 x 1 x 512 size: 2048 [I][ load_text_encoder][ 44]: text feature len 512 [I][ load_tokenizer][ 60]: text token len 77 * Running on local URL: http://0.0.0.0:7860 * To create a public link, set `share=True` in `launch()`. ``` Input: `person`, `dog`, `car`, `horse`, and the test image <img src="./host.jpg" style="zoom:50%;" /> Result: ![](result_host.jpg) #### Inference with M.2 Accelerator card [What is the M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) This demo runs on a Raspberry Pi 5. ``` (py312) axera@raspberrypi:~/samples/yoloworldv2-new.hg $ export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libstdc++.so.6 (py312) axera@raspberrypi:~/samples/yoloworldv2-new.hg $ cp install/lib/axcl_aarch64/libyoloworld.so pyyoloworld/ (py312) axera@raspberrypi:~/samples/yoloworldv2-new.hg $ cd pyyoloworld/ (py312) axera@raspberrypi:~/samples/yoloworldv2-new.hg/pyyoloworld $ python gradio_example.py --yoloworld ../models/yolo_u16_ax650.axmodel --tenc ../models/clip_b1_u16_ax650.axmodel --vocab ../vocab.txt --dev_type axcl Trying to load: /home/axera/samples/yoloworldv2-new.hg/pyyoloworld/aarch64/libyoloworld.so ✅ Successfully loaded: /home/axera/samples/yoloworldv2-new.hg/pyyoloworld/libyoloworld.so [I][ run][ 31]: AXCLWorker start with devid 0 input size: 2 name: images [unknown] [unknown] 1 x 640 x 640 x 3 size: 1228800 name: txt_feats [unknown] [unknown] 1 x 4 x 512 size: 8192 output size: 3 name: stride8 1 x 80 x 80 x 68 size: 1740800 name: stride16 1 x 40 x 40 x 68 size: 435200 name: stride32 1 x 20 x 20 x 68 size: 108800 [I][ yw_create][ 408]: num_classes: 4, num_features: 512, input w: 640, h: 640 is_output_nhwc: 1 input size: 1 name: text_token [unknown] [unknown] 1 x 77 size: 308 output size: 1 name: 2202 1 x 1 x 512 size: 2048 [I][ load_text_encoder][ 44]: text feature len 512 [I][ load_tokenizer][ 60]: text token len 77 * Running on local URL: http://0.0.0.0:7860 * To create a public link, set `share=True` in `launch()`. ``` If your Raspberry Pi 5's IP address is 192.168.1.100, open `http://192.168.1.100:7860` in a browser to use the web app. Input: `man`, `shoes`, `ball`, `person`, and the test image <img src="./football.jpg" style="zoom:50%;" /> Result: ![](result_axcl.png)
pandoradox/qwen2.5-7b-instruct_bactgrow_150
pandoradox
2025-09-25T03:21:18Z
0
0
null
[ "safetensors", "qwen2", "qwen", "instruct", "bactgrow", "7b", "fine-tuned", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-09-25T03:19:02Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - qwen - instruct - bactgrow - 7b - fine-tuned --- # pandoradox/qwen2.5-7b-instruct_bactgrow_150 This is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on the bactgrow dataset. ## Model Details - **Base Model**: Qwen/Qwen2.5-7B-Instruct - **Dataset**: bactgrow - **Model Size**: 7b - **Checkpoint**: 150 - **Training Method**: LoRA (Low-Rank Adaptation) ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("pandoradox/qwen2.5-7b-instruct_bactgrow_150") tokenizer = AutoTokenizer.from_pretrained("pandoradox/qwen2.5-7b-instruct_bactgrow_150") # Your inference code here ```
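To make the card's "# Your inference code here" placeholder concrete, here is one hedged way to run generation with this checkpoint; the prompt text is purely illustrative, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "pandoradox/qwen2.5-7b-instruct_bactgrow_150",
    torch_dtype="auto",
    device_map="auto",  # assumes `accelerate` is available
)
tokenizer = AutoTokenizer.from_pretrained("pandoradox/qwen2.5-7b-instruct_bactgrow_150")

# Illustrative prompt only; the card does not document the expected task format.
messages = [{"role": "user", "content": "Describe the phases of bacterial growth."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```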
corzamennav/blockassist-bc-territorial_wild_antelope_1758770297
corzamennav
2025-09-25T03:19:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T03:19:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GTMX/Qwen2-0.5B-GRPO
GTMX
2025-09-25T03:19:33Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T03:18:20Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: transformers model_name: Qwen2-0.5B-GRPO tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2-0.5B-GRPO This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="GTMX/Qwen2-0.5B-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfyt4cjo0hvhx0n0fnla88yh
BootesVoid
2025-09-25T03:16:27Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-25T03:16:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXY --- # Cmferuoiz03Hpx0N09Wutcrgf_Cmfyt4Cjo0Hvhx0N0Fnla88Yh <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXY", "lora_weights": "https://huggingface.co/BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfyt4cjo0hvhx0n0fnla88yh/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfyt4cjo0hvhx0n0fnla88yh', weight_name='lora.safetensors') image = pipeline('SEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmferuoiz03hpx0n09wutcrgf_cmfyt4cjo0hvhx0n0fnla88yh/discussions) to add images that show off what you’ve made with this LoRA.
pandoradox/qwen2.5-3b-instruct_stressstrain_150
pandoradox
2025-09-25T03:12:43Z
8
0
null
[ "safetensors", "qwen2", "qwen", "instruct", "stressstrain", "3b", "fine-tuned", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-09-19T00:56:25Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-3B-Instruct tags: - qwen - instruct - stressstrain - 3b - fine-tuned --- # pandoradox/qwen2.5-3b-instruct_stressstrain_150 This is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct on the stressstrain dataset. ## Model Details - **Base Model**: Qwen/Qwen2.5-3B-Instruct - **Dataset**: stressstrain - **Model Size**: 3b - **Checkpoint**: 150 - **Training Method**: LoRA (Low-Rank Adaptation) ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("pandoradox/qwen2.5-3b-instruct_stressstrain_150") tokenizer = AutoTokenizer.from_pretrained("pandoradox/qwen2.5-3b-instruct_stressstrain_150") # Your inference code here ```
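As with its 7B sibling above, this card's usage block ends in a placeholder; a pipeline-based sketch in the same style as the other cards in this listing (the prompt is an illustrative assumption about the intended task):

```python
from transformers import pipeline

# Fills in the card's "your inference code here" placeholder; the prompt is
# an illustrative assumption, not a documented task format.
generator = pipeline(
    "text-generation",
    model="pandoradox/qwen2.5-3b-instruct_stressstrain_150",
)
out = generator(
    [{"role": "user", "content": "Explain the elastic region of a stress-strain curve."}],
    max_new_tokens=128,
    return_full_text=False,
)
print(out[0]["generated_text"])
```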
shamik39/distilbert-base-uncased-finetuned-emotion
shamik39
2025-09-25T03:11:46Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-18T22:39:47Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
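The card omits a usage example; a minimal sketch, assuming the classifier head follows the common emotion fine-tuning recipe for this base model (the exact label set is not documented here):

```python
from transformers import pipeline

# Minimal sketch; the label names depend on the (unspecified) training data.
clf = pipeline(
    "text-classification",
    model="shamik39/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I'm thrilled with how this turned out!"))
```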
hitoshura25/webauthn-security-sequential_20250924_214511_stage1_analysis
hitoshura25
2025-09-25T03:07:17Z
0
0
peft
[ "peft", "safetensors", "security", "vulnerability-analysis", "webauthn", "mlx-converted", "license:apache-2.0", "region:us" ]
null
2025-09-25T03:07:16Z
--- base_model: allenai/OLMo-2-1B base_model_relation: adapter library_name: peft peft_type: LORA tags: - security - vulnerability-analysis - webauthn - mlx-converted license: apache-2.0 --- # WebAuthn Security LoRA Adapter This LoRA adapter specializes the base model for WebAuthn security vulnerability analysis. **Converted from MLX format to HuggingFace PEFT format for compatibility.** ## Model Details - **Base Model**: allenai/OLMo-2-1B - **Adapter Type**: LoRA (Low-Rank Adaptation) - **Target Modules**: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj - **LoRA Rank**: 8 - **LoRA Alpha**: 20.0 - **LoRA Dropout**: 0.0 ## Training Details - **Training Framework**: MLX-LM (converted to PEFT format) - **Training Data**: WebAuthn security vulnerabilities - **Iterations**: 500 - **Learning Rate**: 5e-06 - **Optimizer**: adamw - **Fine-tune Type**: lora ## Usage Load this adapter with the PEFT library: ```python from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer # Load configuration and model config = PeftConfig.from_pretrained("path/to/this/adapter") base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(base_model, "path/to/this/adapter") # Load tokenizer tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Use for inference inputs = tokenizer("Analyze this WebAuthn vulnerability:", return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## Conversion Notes This adapter was originally trained using MLX-LM and converted to HuggingFace PEFT format using an evidence-based conversion pipeline that: 1. Converts MLX parameter naming (`lora_a/lora_b`) to PEFT format (`lora_A.weight/lora_B.weight`) 2. Adds proper `base_model.model.` prefixes to parameter names 3. Generates PEFT-compatible configuration with required fields 4. Maintains full compatibility with HuggingFace ecosystem ## Performance This adapter enhances the base model's capability for: - WebAuthn security vulnerability analysis - Code fix generation for security issues - Security-aware code recommendations ## License Apache 2.0
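The conversion notes in this card describe the MLX-to-PEFT parameter renaming in prose; the following is an illustrative sketch of that key mapping. It is a hypothetical helper, not the project's actual conversion pipeline, and the example key assumes typical MLX-LM checkpoint naming.

```python
# Hypothetical helper illustrating the renaming described in the card's
# conversion notes; not the project's actual conversion code.
def mlx_key_to_peft_key(key: str) -> str:
    """Map an MLX-LM LoRA parameter name to PEFT's naming convention."""
    key = key.replace(".lora_a", ".lora_A.weight")
    key = key.replace(".lora_b", ".lora_B.weight")
    return f"base_model.model.{key}"  # PEFT prefixes the wrapped base model

# Example (key layout assumed from typical MLX-LM checkpoints):
assert (
    mlx_key_to_peft_key("model.layers.0.self_attn.q_proj.lora_a")
    == "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight"
)
```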
inclusionAI/Ling-flash-2.0-GGUF
inclusionAI
2025-09-25T03:04:18Z
0
6
null
[ "base_model:inclusionAI/Ling-flash-2.0", "base_model:finetune:inclusionAI/Ling-flash-2.0", "license:mit", "region:us" ]
null
2025-09-24T13:22:05Z
---
license: mit
base_model:
- inclusionAI/Ling-flash-2.0
---

## Introduction

These GGUF files were quantized with https://github.com/im0qianqian/llama.cpp .

For model inference, download our release package from https://github.com/im0qianqian/llama.cpp/releases .

## Quick start

```bash
# Use a local model file
llama-cli -m my_model.gguf

# Launch OpenAI-compatible API server
llama-server -m my_model.gguf
```

## Demo

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/MC4h9G33YjvpboRA4LPfO.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/0YcuTJFLs6k9K4Sgzd-UD.png)

## PR

The following upstream pull requests add support for this architecture and are awaiting merge:

- [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
- [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)
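Once `llama-server` from the Quick start is running, it can be queried like any OpenAI-compatible endpoint. The sketch below assumes the server's default port 8080 and the `openai` Python package; neither is prescribed by this card.

```python
# Sketch of querying the OpenAI-compatible endpoint started with `llama-server`.
# Assumes the default port 8080 and the `openai` Python client; adjust
# base_url if the server was started with a different --port.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="no-key-needed")
resp = client.chat.completions.create(
    model="my_model.gguf",  # llama-server serves the single loaded model
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
)
print(resp.choices[0].message.content)
```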
inclusionAI/Ling-mini-2.0-GGUF
inclusionAI
2025-09-25T03:02:12Z
0
9
null
[ "gguf", "base_model:inclusionAI/Ling-mini-2.0", "base_model:quantized:inclusionAI/Ling-mini-2.0", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-24T09:15:09Z
---
license: mit
base_model:
- inclusionAI/Ling-mini-2.0
---

## Introduction

These GGUF files were quantized with https://github.com/im0qianqian/llama.cpp .

For model inference, download our release package from https://github.com/im0qianqian/llama.cpp/releases .

## Quick start

```bash
# Use a local model file
llama-cli -m my_model.gguf

# Launch OpenAI-compatible API server
llama-server -m my_model.gguf
```

## Demo

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/MC4h9G33YjvpboRA4LPfO.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/0YcuTJFLs6k9K4Sgzd-UD.png)

## PR

The following upstream pull requests add support for this architecture and are awaiting merge:

- [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
- [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)
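For interactive use, the same OpenAI-compatible endpoint can also stream tokens. As above, the default port 8080 and the `openai` Python client are assumptions, not requirements of this card.

```python
# Sketch of streaming tokens from the `llama-server` endpoint in the Quick start.
# Assumes the default port 8080 and the `openai` Python client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="no-key-needed")
stream = client.chat.completions.create(
    model="my_model.gguf",
    messages=[{"role": "user", "content": "Count from 1 to 5."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```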
corzamennav/blockassist-bc-territorial_wild_antelope_1758769065
corzamennav
2025-09-25T03:02:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T02:58:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AXERA-TECH/Qwen2.5-7B-Instruct
AXERA-TECH
2025-09-25T03:00:36Z
22
1
transformers
[ "transformers", "Context", "Qwen2.5-7B-Instruct-GPTQ-INT8", "Qwen2.5-7B-Instruct-GPTQ-INT4", "text-generation", "zh", "en", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-06-22T14:50:14Z
---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct-GPTQ-INT8
- Qwen/Qwen2.5-7B-Instruct-GPTQ-INT4
pipeline_tag: text-generation
library_name: transformers
tags:
- Context
- Qwen2.5-7B-Instruct-GPTQ-INT8
- Qwen2.5-7B-Instruct-GPTQ-INT4
---

# Qwen2.5-7B-Instruct

This version of Qwen2.5-7B-Instruct has been converted to run on the Axera NPU using **w8a16** and **w4a16** quantization.

Compatible with Pulsar2 version: 4.1

## Features

- Supports longer contexts (2k in this sample)
- Supports multi-turn context dialogue
- Supports system prompt KV cache

## Convert tools links:

For those interested in model conversion, you can export an axmodel through:

[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)

[AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)

[AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)

### Convert script

The following shows how to convert Qwen2.5-7B-Instruct-GPTQ-Int4:

```
pulsar2 llm_build --input_path Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4 \
--output_path Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4-ctx-ax650 \
--hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
--last_kv_cache_len 128 \
--last_kv_cache_len 256 \
--last_kv_cache_len 384 \
--last_kv_cache_len 512 \
--last_kv_cache_len 640 \
--last_kv_cache_len 768 \
--last_kv_cache_len 896 \
--last_kv_cache_len 1024 \
--chip AX650 -c 1 --parallel 8
```

## Support Platform

- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M4N-HAT](https://wiki.sipeed.com/hardware/zh/maixIV/m4nhat/intro.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)

|Chips|w8a16|w4a16| DDR(w8) | Flash(w8) | DDR(w4) | Flash(w4) |
|--|--|--|--|--|--|--|
|AX650| 2.8 tokens/sec| 5.0 tokens/sec | | | 5.2GB | 5.7GB |

## How to use

Download all files from this repository to the device.

```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ tree -L 1
.
├── config.json
├── main_api
├── main_api_ax650
├── main_api_axcl_aarch64
├── main_api_axcl_x86
├── main_ax650
├── main_axcl_aarch64
├── main_axcl_x86
├── post_config.json
├── qwen2.5-7b-ctx-int4-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer_uid.py
├── README.md
├── run_qwen2.5_7b_ctx_ax650.sh
├── run_qwen2.5_7b_ctx_int4_ax650.sh
├── run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
└── run_qwen2.5_7b_ctx_int4_axcl_x86.sh

3 directories, 15 files
```

#### Start the Tokenizer service

```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ python qwen2.5_tokenizer_uid.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345
```

#### System prompt cache

- The system prompt can be preset via the `--system_prompt` option in the run script
- The system prompt can be cached as KV cache in the folder given by `--kvcache_path`, so it loads quickly on the next run
- This folder must be created manually before running, for example `mkdir kvcache`

```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ cat run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
./main_axcl_aarch64 \
--template_filename_axmodel "qwen2.5-7b-ctx-int4-ax650/qwen2_p128_l%d_together.axmodel" \
--axmodel_num 28 \
--url_tokenizer_model "http://0.0.0.0:12345" \
--filename_post_axmodel "qwen2.5-7b-ctx-int4-ax650/qwen2_post.axmodel" \
--filename_tokens_embed "qwen2.5-7b-ctx-int4-ax650/model.embed_tokens.weight.bfloat16.bin" \
--tokens_embed_num 152064 \
--tokens_embed_size 3584 \
--use_mmap_load_embed 1 \
--live_print 1 \
--devices 0

#--system_prompt "你的名字叫小智(allen),你是一个人畜无害的AI助手。深圳市今天(4月1日)阴天,愚人节,气温在14°C至19°C之间,微风。" \
#--kvcache_path "./kvcache" \
```

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board

TODO

#### Inference with M.2 Accelerator card

[What is an M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) This demo runs on a Raspberry Pi 5.

```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ ./run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
[I][ Init][ 130]: LLM init start
[I][ Init][ 34]: connect http://0.0.0.0:12345 ok
[I][ Init][ 57]: uid: ae9adea5-c64e-47df-92ca-29cbcc5a865f
bos_id: -1, eos_id: 151645
3% | ██ | 1 / 31 [0.49s<15.16s, 2.04 count/s] tokenizer init ok[I][ Init][ 45]: LLaMaEmbedSelector use mmap
6% | ███ | 2 / 31 [0.49s<7.59s, 4.08 count/s] embed_selector init ok
[I][ run][ 30]: AXCLWorker start with devid 0
54% | ████████████████████████████ █ █ █ ██ ██ | 17 / 31 [39.92s<77.35s, 0.40 count/s] init 24 axmodel ok,devid(0) remain_cmm(-1 MB) | 16 / 31 [39.92s<77.35s,100% | ████████████████████████████████ | 31 / 31 [80.60s<83.29s, 0.37 count/s] init post axmodel ok,remain_cmm(1324 MB)1891 MB)
[I][ Init][ 221]: max_token_len : 2047
[I][ Init][ 224]: kv_cache_size : 512, kv_cache_num: 2047
[I][ Init][ 232]: prefill_token_num : 128
[I][ Init][ 236]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 236]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 236]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 236]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 236]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 236]: grp: 6, prefill_max_token_num : 640
[I][ Init][ 236]: grp: 7, prefill_max_token_num : 768
[I][ Init][ 236]: grp: 8, prefill_max_token_num : 896
[I][ Init][ 236]: grp: 9, prefill_max_token_num : 1024
[I][ Init][ 240]: prefill_max_token_num : 1024
________________________
| ID| remain cmm(MB)|
========================
| 0| 1324|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[I][ load_config][ 282]: load config:
{
 "enable_repetition_penalty": false,
 "enable_temperature": true,
 "enable_top_k_sampling": true,
 "enable_top_p_sampling": false,
 "penalty_window": 20,
 "repetition_penalty": 1.2,
 "temperature": 0.9,
 "top_k": 10,
 "top_p": 0.8
}
[I][ Init][ 263]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 324]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 367]: input_num_token:21
[I][ main][ 234]: precompute_len: 21
[I][ main][ 235]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> nice [I][ SetKVCache][ 614]: prefill_grpid:2 kv_cache_num:128 precompute_len:21 input_num_token:9 [I][ SetKVCache][ 617]: current prefill_max_token_num:896 [I][ Run][ 855]: input token num : 9, prefill_split_num : 1 [I][ Run][ 887]: input_num_token:9 [I][ Run][1016]: ttft: 928.08 ms Nice to meet you! If you have any questions or need some help, feel free to ask. [N][ Run][1168]: hit eos,avg 4.36 token/s [I][ GetKVCache][ 583]: precompute_len:50, remaining:974 prompt >> q [I][ run][ 80]: AXCLWorker exit with devid 0 (base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ ```
Lakshan2003/Phi-4-mini-instruct-customerservice
Lakshan2003
2025-09-25T02:59:29Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/phi-4-mini-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "en", "dataset:Lakshan2003/customer-support-client-agent-conversations", "arxiv:1910.09700", "base_model:microsoft/Phi-4-mini-instruct", "base_model:adapter:microsoft/Phi-4-mini-instruct", "license:mit", "region:us" ]
text-generation
2025-09-25T02:07:48Z
--- base_model: - unsloth/phi-4-mini-instruct-unsloth-bnb-4bit - microsoft/Phi-4-mini-instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/phi-4-mini-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth license: mit datasets: - Lakshan2003/customer-support-client-agent-conversations language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
TuneShift-KD/qwen_mbpp_target_lora
TuneShift-KD
2025-09-25T02:58:54Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-25T02:58:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TuneShift-KD/qwen_mbpp_source_lora
TuneShift-KD
2025-09-25T02:58:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-25T02:58:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TuneShift-KD/gemma_qwen_gsm8k_target_lora
TuneShift-KD
2025-09-25T02:57:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-25T02:57:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhuojing-huang/gpt2-german20k-english10k-baseline0
zhuojing-huang
2025-09-25T02:55:11Z
3
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T13:40:41Z
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: gpt2-german20k-english10k-baseline
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt2-german20k-english10k-baseline

This model was trained from scratch on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 244140

### Training results

### Framework versions

- Transformers 4.53.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
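The card lacks a usage snippet; a minimal generation sketch follows, using this repository's id. The prompt and sampling settings are illustrative, not taken from the card.

```python
# Minimal generation sketch for this from-scratch GPT-2 checkpoint.
# Sampling settings are illustrative defaults, not from the card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="zhuojing-huang/gpt2-german20k-english10k-baseline0",
)
out = generator("Es war einmal", max_new_tokens=40, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```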
atrost/math_sft_40K_trl_think_SFT_Regularized-0.5_Normalize-False
atrost
2025-09-25T02:49:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:Qwen/Qwen3-1.7B-Base", "base_model:finetune:Qwen/Qwen3-1.7B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T00:05:06Z
--- base_model: Qwen/Qwen3-1.7B-Base library_name: transformers model_name: math_sft_40K_trl_think_SFT_Regularized-0.5_Normalize-False tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for math_sft_40K_trl_think_SFT_Regularized-0.5_Normalize-False This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_think_SFT_Regularized-0.5_Normalize-False", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/if8panql) This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.8.0 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hieptran318204/vti-embedding-bge-v1
hieptran318204
2025-09-25T02:48:04Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:900", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-22T16:58:52Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:900 - loss:MultipleNegativesRankingLoss base_model: BAAI/bge-m3 widget: - source_sentence: Nhân viên có thể bù công bằng những loại ngày nghỉ nào? sentences: - 'QUY TRÌNH CHẤM CÔNG,LÀM THÊM GIỜ Điều 6: Hiệu lực thi hành - Quy trình này có hiệu lực từ ngày ...' - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 2: THỜI GIỜ LÀM VIỆC - THỜI GIỜ NGHỈ NGƠI. Điều 14. Làm thêm giờ 14.1. Tùy theo tình hình kinh doanh, Công ty tổ chức làm thêm giờ sau khi được sự đồng ý của NLĐ nhưng phải đảm bảo tổng số giờ làm thêm theo tuần và tổng số giờ làm việc bình thường không quá 12 giờ 01 ngày;...' - 'III. 2. Quy tắc tính thời gian làm việc và cách tính vi phạm a) Thời gian làm việc: i) Ca làm việc tiêu chuẩn, không cần đăng ký, hệ thống mặc định: 08:30-17:30 ii) Ca làm việc khác tiêu chuẩn: Ca làm việc: Từ 08:00 đến 17:00 hoặc; Ca làm việc: Từ 09:00 đến 18:00 Đối với ca làm việc khác ca tiêu chuẩn, muộn nhất ngày cuối cùng của tháng, nhân viên log VMS đăng ký ca làm việc cho tháng kế tiếp. Và...' - source_sentence: Điều 3 của quy trình hỗ trợ nhân viên mới có mục đích gì? sentences: - QUY TRÌNH HỖ TRỢ NHÂN VIÊN MỚI ONBOARD. Điều 3. Giải thích một số định nghĩa, thuật ngữ - Công ty Là khái niệm chung chỉ tất cả các công ty thuộc VTI Group. - Onboard - OB Là quá trình kể từ khi nhân viên nhận offer tại công ty, cho đến khi nhân viên làm việc tại VTI trong vòng 01 tháng. - Hệ thống VMS Hệ thống quản trị nội bộ VTI.... - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 3: HỌC NGHỀ, ĐÀO TẠO, BỒI DƯỠNG NÂNG CAO TRÌNH ĐỘ KỸ NĂNG NGHỀ. Điều 23. Đào tạo- học nghề 23.1. Tùy theo tính chất đặc biệt của một số công việc, Công ty có thể tuyển NLĐ chưa có kinh nghiệm vào học nghề,...' - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 2: THỜI GIỜ LÀM VIỆC - THỜI GIỜ NGHỈ NGƠI. Điều 13. Thời giờ làm việc 13.1. Thời giờ làm việc bình thường không quá 08 giờ trong 01 ngày và 44 giờ trong 01 tuần. Buổi sáng: 08h30 – 12h00 Nghỉ trưa: 12h00 – 13h00 Buổi chiều: 13h00 – 17h30 Nghỉ giải lao: Sáng 10 phút, Chiều 10 phút Công ty có quyền quy định làm việc theo giờ hoặc ngày hoặc tuần phù hợp với tình hình kinh doanh thực tế từng thời kỳ; trường hợp theo tuần thì thời giờ làm việc bình thường không quá 10 giờ trong 01 ngày, nhưng không quá 44 giờ trong 01 tuần...' - source_sentence: Chế độ 60 phút/ngày được áp dụng cho đối tượng nào? sentences: - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 1: THỬ VIỆC, HỢP ĐỒNG LAO ĐỘNG. Điều 6. Quy định về thử việc 6.1. Thời gian thử việc NLĐ được Công ty tuyển dụng có thể trải qua một (01) lần thử việc tùy thuộc vào thỏa thuận của Công ty và NLĐ...' - '## III.3. Hướng dẫn log tool VMS ### a) Chỉ khi cần request bù công do đi muộn / về sớm Cách 1: Vào màn hình Attendance. Kiểm tra công trong ngày/tháng. Nếu có nhu cầu quy đổi từ nghỉ bù/ nghỉ phép click Absence chọn loại quy đổi phù hợp. Cách 2: Vào Leaves My Leaves Absence Request Create...' - QUY TRÌNH HỖ TRỢ NHÂN VIÊN MỚI ONBOARD. Điều 2. Phạm vi và đối tượng áp dụng - Quy định áp dụng cho khối hỗ trợ tiếp đón nhân viên mới đến làm việc tại công ty - Áp dụng cho toàn bộ nhân viên mới đến làm việc tại Công ty Cổ phần VTI, Công ty TNHH VTI Education, Công ty TNHH Trainocate, Công ty TNHH GITS. - source_sentence: Quy trình hỗ trợ nhân viên mới có áp dụng cho Công ty Cổ phần VTI không? 
sentences: - 'QUY TRÌNH CHẤM CÔNG,LÀM THÊM GIỜ Điều 6: Hiệu lực thi hành - Quy trình này có hiệu lực từ ngày 01/05/2023, ban hành lần đầu. - Các quy trình khác với quy trình này sẽ được xác nhận và được sự đồng ý phê duyệt của Ban Lãnh đạo hoặc người có thẩm quyền. - Trong quá trình thực hiện, nếu có vướng mắc, Phòng Nhân sự công ty sẽ có báo cáo và đề nghị Ban Giám đốc xem xét sửa đổi, bổ sung cho phù hợp. - Các trường hợp bổ sung, sửa đổi sẽ được cập nhật và ban hành ở các phiên bản sau.' - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 1: THỬ VIỆC, HỢP ĐỒNG LAO ĐỘNG. Điều 9. Tạm hoãn hoặc chấm dứt HĐLĐ 9.1. HĐLĐ được tạm hoãn trong các trường hợp theo quy định của Pháp Luật Lao Động. Trong thời gian tạm hoãn thực hiện HĐLĐ, NLĐ không được hưởng lương và quyền, lợi ích đã giao kết trong HĐLĐ, trừ trường hợp NLĐ và Công ty có thỏa thuận rõ ràng tại HĐLĐ hoặc văn bản khác hoặc Pháp Luật Lao Động...' - QUY TRÌNH HỖ TRỢ NHÂN VIÊN MỚI ONBOARD. Điều 2. Phạm vi và đối tượng áp dụng - Quy định áp dụng cho khối hỗ trợ tiếp đón nhân viên mới đến làm việc tại công ty - Áp dụng cho toàn bộ nhân viên mới đến làm việc tại Công ty Cổ phần VTI, Công ty TNHH VTI Education, Công ty TNHH Trainocate, Công ty TNHH GITS. - source_sentence: Điều 21 quy định về vấn đề gì? sentences: - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 1: THỬ VIỆC, HỢP ĐỒNG LAO ĐỘNG. Điều 8. Chuyển NLĐ làm công việc khác so với HĐLĐ 8.1. Công ty được quyền tạm thời chuyển NLĐ làm công việc khác so với HĐLĐ theo quy định sau: (a) Điều kiện: khi Công ty gặp khó khăn đột xuất do thiên tai, hỏa hoạn, dịch bệnh nguy hiểm,...' - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 2: THỜI GIỜ LÀM VIỆC - THỜI GIỜ NGHỈ NGƠI. Điều 21. Thủ tục nghỉ phép Ngoại trừ các trường hợp nghỉ đột xuất như quy định tại Khoản 18.3 Nội quy Lao động hoặc đi trễ về sớm như quy định tại Khoản 18.4 Nội quy Lao động...' - 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 2: THỜI GIỜ LÀM VIỆC - THỜI GIỜ NGHỈ NGƠI. Điều 19. Nghỉ thai sản NLĐ được nghỉ hưởng chế độ thai sản...' pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on BAAI/bge-m3 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("hieptran318204/vti-embedding-bge-v1")
# Run inference
sentences = [
    'Điều 21 quy định về vấn đề gì?',
    'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 2: THỜI GIỜ LÀM VIỆC - THỜI GIỜ NGHỈ NGƠI. Điều 21. Thủ tục nghỉ phép Ngoại trừ các trường hợp nghỉ đột xuất như quy định tại Khoản 18.3 Nội quy Lao động hoặc đi trễ về sớm như quy định tại Khoản 18.4 Nội quy Lao động, khi có nhu cầu nghỉ hằng năm theo quy định tại Điểu 20 và nghỉ việc riêng không hưởng lương theo quy định tại điều 17.2 (b) NLĐ phải tuân thủ các thủ tục xin nghỉ như sau: 21.1. NLĐ phải đăng ký nghỉ phép trước ngày nghỉ thực tế như Quy định về Quản lý chấm công, làm thêm giờ và chế độ nghỉ của Công ty. NLĐ nữ cần phải thông báo cho CBQL khi biết mình có thai và nêu thời gian nghỉ thai sản dự kiến để Công ty có thể bố trí nhân sự và phân công công việc để đảm bảo công việc không bị gián đoạn trong thời gian NLĐ nữ nghỉ thai sản. NLĐ nữ cần thông báo thời gian nghỉ trước ít nhất một (01) tháng tính từ thời gian dự sinh cho CBQL và Phòng Nhân sự. Nếu ngày dự sinh và ngày thực tế không trùng nhau. NLĐ nữ thông báo cho Phòng Nhân sự về ngày thực tế trong vòng mười (10) ngày kể từ ngày thực tế sinh. 21.2. Cách đăng ký nghỉ phép: (a) NLĐ đăng ký nghỉ phép thông qua hệ thống quản trị nội bộ VMS (Trong trường hợp hệ thống quản trị nội bộ VMS bị lỗi, Công ty xem xét cho NLĐ đăng ký nghỉ phép qua email hoặc đơn xin nghỉ phép gửi cho CBQL phê duyệt). NLĐ phải đảm bảo đăng ký nghỉ phép phải được phê duyệt và chuyển thông tin tới phòng Nhân sự. (b) Phê duyệt nghỉ phép phải được thực hiện và thông báo đến Phòng Nhân sự (trong trường hợp đăng ký bằng email) ngay khi NLĐ nhận được phê duyệt và trước kỳ lương cuối tháng. 21.3. Các trường hợp nghỉ đột xuất như ốm đau, cấp cứu, tang gia..., NLĐ phải thông báo cho cấp quản lý trực tiếp biết càng sớm càng tốt bằng điện thoại hoặc email hoặc thông qua đồng nghiệp của mình.
Ngay sau khi kết thúc đợt nghỉ, NLĐ phải thực hiện các thủ tục quy định tại Nội quy Lao động này. Trường hợp nghỉ do ốm đau, NLĐ có nghĩa vụ nộp kèm theo Giấy chứng nhận của bác sĩ do một bệnh viện được thành lập hợp pháp tại Việt Nam cấp. Trong trường hợp NLĐ thông báo nghỉ ốm nhưng lại không có giấy chứng nhận nghỉ ốm của bác sĩ thì sẽ coi như nghỉ không có lý do chính đáng. 21.4. NLĐ làm việc theo thời gian quy định vi bất cứ lý do gì mà đến nơi làm việc sau giờ quy định hoặc rời khỏi nơi làm việc trước giờ quy định hoặc vắng mặt một khoảng thời gian trong giờ làm việc đều phải xin phép cấp quản lý trực tiếp. Việc không xin phép sẽ xem như nghỉ việc/tự ý bỏ việc không có lý do chính đáng theo quy định tại Điều 22 Nội quy Lao động. Theo đó, NLĐ, ngoài hình thức kỷ luật lao động mà Công ty sẽ xem xét áp dụng, sẽ không được trả lương cho thời gian đi trễ, về sớm hoặc vắng mặt đó.', 'NỘI QUY LAO ĐỘNG. PHẦN 2: QUY ĐỊNH CHI TIẾT. CHƯƠNG 2: THỜI GIỜ LÀM VIỆC - THỜI GIỜ NGHỈ NGƠI. Điều 19. Nghỉ thai sản NLĐ được nghỉ hưởng chế độ thai sản trong các trường hợp sau: - Khám thai - Sẩy thai, nạo, hút thai, thai chết lưu hoặc phá thai bệnh lý - Sinh con, con chết sau khi sinh - Mang thai hộ - Nhận nuôi con nuôi - Thực hiện các biện pháp tránh thai Tất cả các trường hợp trên đêu theo Quy định của Luật Bảo hiểm xã hội hiện hành.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 900 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 900 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 20.8 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 54 tokens</li><li>mean: 479.18 tokens</li><li>max: 1190 tokens</li></ul> | * Samples: | anchor | positive | |:-----------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Mục đích chính của tài liệu Hướng dẫn quy định chấm công là gì?</code> | <code>I. 
MỤC ĐÍCH của tài liệu Hướng dẫn quy định chấm công Hướng dẫn này được xây dựng nhằm đảm bảo việc thực hiện đúng theo “Quy định thời gian làm việc, làm thêm giờ và chế độ nghỉ” Mã TL: 03/2025/QyĐ-VTI, hiệu lực ngày 01/05/2025, góp phần nâng cao hiệu quả công việc và đảm bảo quyền lợi cho tất cả nhân viên trong Công ty.</code> | | <code>Hướng dẫn này được xây dựng dựa trên quy định nào của công ty?</code> | <code>I. MỤC ĐÍCH của tài liệu Hướng dẫn quy định chấm công Hướng dẫn này được xây dựng nhằm đảm bảo việc thực hiện đúng theo “Quy định thời gian làm việc, làm thêm giờ và chế độ nghỉ” Mã TL: 03/2025/QyĐ-VTI, hiệu lực ngày 01/05/2025, góp phần nâng cao hiệu quả công việc và đảm bảo quyền lợi cho tất cả nhân viên trong Công ty.</code> | | <code>Tài liệu Hướng dẫn quy định chấm công có mã số bao nhiêu?</code> | <code>I. MỤC ĐÍCH của tài liệu Hướng dẫn quy định chấm công Hướng dẫn này được xây dựng nhằm đảm bảo việc thực hiện đúng theo “Quy định thời gian làm việc, làm thêm giờ và chế độ nghỉ” Mã TL: 03/2025/QyĐ-VTI, hiệu lực ngày 01/05/2025, góp phần nâng cao hiệu quả công việc và đảm bảo quyền lợi cho tất cả nhân viên trong Công ty.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 2 - `learning_rate`: 1e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 2 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 
'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.2222 | 50 | 0.3243 | | 0.4444 | 100 | 0.2173 | | 0.6667 | 150 | 0.1512 | | 0.8889 | 200 | 0.1577 | | 1.1111 | 250 | 0.1922 | | 1.3333 | 300 | 0.1553 | | 1.5556 | 350 | 0.0871 | | 1.7778 | 400 | 0.1744 | | 2.0 | 450 | 0.0735 | | 2.2222 | 500 | 0.1036 | | 2.4444 | 550 | 0.1207 | | 2.6667 | 600 | 0.1486 | | 2.8889 | 650 | 0.1309 | | 3.1111 | 700 | 0.1585 | | 3.3333 | 750 | 0.1154 | | 3.5556 | 800 | 0.0356 | | 3.7778 | 850 | 0.1242 | | 4.0 | 900 | 0.0692 | | 4.2222 | 950 | 0.0791 | | 4.4444 | 1000 | 0.1345 | | 4.6667 | 1050 | 0.1107 | | 4.8889 | 1100 | 0.0569 | ### Framework Versions - Python: 3.11.13 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.8.1 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, 
suggestions, or questions, to contact the Model Card authors.* -->
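For readers who want to reproduce a comparable fine-tune, the sketch below wires up the setup described above: anchor/positive pairs trained with MultipleNegativesRankingLoss and the listed hyperparameters. The toy pairs are stand-ins; the actual 900-sample training set is not published in this card.

```python
# Sketch of the fine-tuning setup described above: anchor/positive pairs
# trained with MultipleNegativesRankingLoss. The toy pairs below are
# stand-ins for the real 900-sample dataset, which is not published here.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("BAAI/bge-m3")
train_dataset = Dataset.from_dict({
    "anchor": [
        "Điều 21 quy định về vấn đề gì?",
        "Thời giờ làm việc bình thường là bao nhiêu?",
    ],
    "positive": [
        "Điều 21. Thủ tục nghỉ phép ...",
        "Điều 13. Thời giờ làm việc ...",
    ],
})
args = SentenceTransformerTrainingArguments(
    output_dir="vti-embedding-bge-v1",
    per_device_train_batch_size=2,
    learning_rate=1e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```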
zhnuc/high_quality_dpo_2_cp200
zhnuc
2025-09-25T02:46:09Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T02:42:16Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
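Since the card's "How to Get Started" section above is still a placeholder, here is a minimal, hedged sketch for loading a transformers text-generation checkpoint like this one; the repo id below is a hypothetical placeholder, since the card does not state one.

```python
# Minimal sketch (not from the card): load a causal-LM checkpoint with transformers.
# "your-username/your-model" is a hypothetical placeholder, not this repo's real id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-model"  # replace with this model's actual Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```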
hientan105/blockassist-bc-lanky_amphibious_squirrel_1758767359
hientan105
2025-09-25T02:44:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lanky amphibious squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T02:44:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lanky amphibious squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lyl472324464/twist-new
lyl472324464
2025-09-25T02:44:06Z
0
0
null
[ "LeRobot", "license:apache-2.0", "region:us" ]
null
2025-09-25T02:40:28Z
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "aloha", "total_episodes": 2, "total_frames": 1600, "total_tasks": 1, "total_videos": 0, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:2" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "observation.state": { "dtype": "float32", "shape": [ 14 ], "names": [ [ "right_waist", "right_shoulder", "right_elbow", "right_forearm_roll", "right_wrist_angle", "right_wrist_rotate", "right_gripper", "left_waist", "left_shoulder", "left_elbow", "left_forearm_roll", "left_wrist_angle", "left_wrist_rotate", "left_gripper" ] ] }, "action": { "dtype": "float32", "shape": [ 14 ], "names": [ [ "right_waist", "right_shoulder", "right_elbow", "right_forearm_roll", "right_wrist_angle", "right_wrist_rotate", "right_gripper", "left_waist", "left_shoulder", "left_elbow", "left_forearm_roll", "left_wrist_angle", "left_wrist_rotate", "left_gripper" ] ] }, "observation.velocity": { "dtype": "float32", "shape": [ 14 ], "names": [ [ "right_waist", "right_shoulder", "right_elbow", "right_forearm_roll", "right_wrist_angle", "right_wrist_rotate", "right_gripper", "left_waist", "left_shoulder", "left_elbow", "left_forearm_roll", "left_wrist_angle", "left_wrist_rotate", "left_gripper" ] ] }, "observation.effort": { "dtype": "float32", "shape": [ 14 ], "names": [ [ "right_waist", "right_shoulder", "right_elbow", "right_forearm_roll", "right_wrist_angle", "right_wrist_rotate", "right_gripper", "left_waist", "left_shoulder", "left_elbow", "left_forearm_roll", "left_wrist_angle", "left_wrist_rotate", "left_gripper" ] ] }, "observation.images.cam_high": { "dtype": "image", "shape": [ 3, 480, 640 ], "names": [ "channels", "height", "width" ] }, "observation.images.cam_low": { "dtype": "image", "shape": [ 3, 480, 640 ], "names": [ "channels", "height", "width" ] }, "observation.images.cam_left_wrist": { "dtype": "image", "shape": [ 3, 480, 640 ], "names": [ "channels", "height", "width" ] }, "observation.images.cam_right_wrist": { "dtype": "image", "shape": [ 3, 480, 640 ], "names": [ "channels", "height", "width" ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
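The card stops at the structure description, so here is a minimal sketch of loading this dataset with LeRobot; the exact import path is an assumption, as it has moved between lerobot releases.

```python
# Minimal sketch (assumption: a lerobot release exposing LeRobotDataset at this path).
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lyl472324464/twist-new")
print(dataset.num_episodes, dataset.num_frames)  # expect 2 / 1600 per meta/info.json above

frame = dataset[0]
print(frame["observation.state"].shape)  # 14-dim joint state, per the feature spec above
```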
Shani123/pi0_so101_pickup_toothpicks_2_plus_recovery_20000
Shani123
2025-09-25T02:44:00Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "pi0", "dataset:Shani123/pickup_toothpicks_2_plus_recovery", "arxiv:2410.24164", "license:apache-2.0", "region:us" ]
robotics
2025-09-25T02:42:23Z
--- datasets: Shani123/pickup_toothpicks_2_plus_recovery library_name: lerobot license: apache-2.0 model_name: pi0 pipeline_tag: robotics tags: - robotics - pi0 - lerobot --- # Model Card for pi0 <!-- Provide a quick summary of what the model is/does. --> [Pi0](https://huggingface.co/papers/2410.24164) is a generalist vision-language-action transformer that converts multimodal observations and text instructions into robot actions for zero-shot task transfer. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=pi0 \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
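For loading the trained policy directly in Python (rather than via `lerobot-record`), a minimal sketch follows; the class name and import path are assumptions that vary across lerobot versions.

```python
# Minimal sketch (assumption: a lerobot version exposing PI0Policy at this path).
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy

policy = PI0Policy.from_pretrained("Shani123/pi0_so101_pickup_toothpicks_2_plus_recovery_20000")
policy.eval()

# At control time, build an observation dict matching the training features
# and query the policy, e.g. action = policy.select_action(observation).
```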
orionshaw777/codeparrot-ds
orionshaw777
2025-09-25T02:32:55Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T08:41:20Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.2796 | 0.9264 | 5000 | 1.5689 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
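The card leaves intended uses unspecified; assuming the checkpoint was trained for code completion (as the "codeparrot" name suggests), a minimal generation sketch:

```python
# Minimal sketch: code completion with the fine-tuned GPT-2 checkpoint.
# The code-generation use case is assumed from the model name, not stated in the card.
from transformers import pipeline

generator = pipeline("text-generation", model="orionshaw777/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```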
corzamennav/blockassist-bc-territorial_wild_antelope_1758767216
corzamennav
2025-09-25T02:29:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T02:27:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Crystal-Moon-12B-GGUF
mradermacher
2025-09-25T02:27:43Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "roleplay", "en", "base_model:Vortex5/Crystal-Moon-12B", "base_model:quantized:Vortex5/Crystal-Moon-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-24T10:49:53Z
--- base_model: Vortex5/Crystal-Moon-12B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge - roleplay --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Vortex5/Crystal-Moon-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Crystal-Moon-12B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Crystal-Moon-12B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Crystal-Moon-12B-GGUF/resolve/main/Crystal-Moon-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
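Beyond the llama.cpp-based UIs covered in TheBloke's READMEs, the quants above can also be run from Python; a minimal sketch with llama-cpp-python, assuming one of the GGUF files has already been downloaded locally:

```python
# Minimal sketch (assumes llama-cpp-python is installed and a quant from the
# table above, e.g. Crystal-Moon-12B.Q4_K_S.gguf, was downloaded locally).
from llama_cpp import Llama

llm = Llama(model_path="Crystal-Moon-12B.Q4_K_S.gguf", n_ctx=4096)
out = llm("Write the opening line of a moonlit roleplay scene.", max_tokens=128)
print(out["choices"][0]["text"])
```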
reinforce-flow/qwen2.5math-1.5b-gen8-global-meanvar-nostd-iter-360
reinforce-flow
2025-09-25T02:26:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T02:26:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
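As with the template card above, the getting-started section is a placeholder; here is a minimal sketch for this Qwen2-based text-generation checkpoint (chat-style usage is an assumption based on the `conversational` tag):

```python
# Minimal sketch: chat-style generation with the transformers pipeline.
from transformers import pipeline

pipe = pipeline("text-generation",
                model="reinforce-flow/qwen2.5math-1.5b-gen8-global-meanvar-nostd-iter-360")
out = pipe([{"role": "user", "content": "Compute 17 * 23 step by step."}],
           max_new_tokens=256, return_full_text=False)
print(out[0]["generated_text"])
```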
nick1880/blockassist
nick1880
2025-09-25T02:21:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky powerful falcon", "arxiv:2504.07091", "region:us" ]
null
2025-09-13T06:23:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - barky powerful falcon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
wan-world/Wan2.1-I2V-14B-480P
wan-world
2025-09-25T02:17:22Z
0
0
diffusers
[ "diffusers", "safetensors", "i2v", "video", "video-generation", "image-to-video", "en", "zh", "license:apache-2.0", "region:us" ]
image-to-video
2025-09-25T02:17:22Z
--- license: apache-2.0 language: - en - zh pipeline_tag: image-to-video library_name: diffusers tags: - video - video-generation --- # Wan2.1 <p align="center"> <img src="assets/logo.png" width="400"/> </p> <p align="center"> 💜 <a href=""><b>Wan</b></a> &nbsp&nbsp | &nbsp&nbsp 🖥️ <a href="https://github.com/Wan-Video/Wan2.1">GitHub</a> &nbsp&nbsp | &nbsp&nbsp🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="">Paper (Coming soon)</a> &nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://wanxai.com">Blog</a> &nbsp&nbsp | &nbsp&nbsp💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a>&nbsp&nbsp | &nbsp&nbsp 📖 <a href="https://discord.gg/p5XbdQV7">Discord</a>&nbsp&nbsp <br> ----- [**Wan: Open and Advanced Large-Scale Video Generative Models**]() <br> In this repository, we present **Wan2.1**, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. **Wan2.1** offers these key features: - 👍 **SOTA Performance**: **Wan2.1** consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks. - 👍 **Supports Consumer-grade GPUs**: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models. - 👍 **Multiple Tasks**: **Wan2.1** excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation. - 👍 **Visual Text Generation**: **Wan2.1** is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications. - 👍 **Powerful Video VAE**: **Wan-VAE** delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation. This repo contains our I2V-14B model, which is capable of generating 480P videos, offering advantages in terms of fast generation and excellent quality. ## Video Demos <div align="center"> <video width="80%" controls> <source src="https://cloud.video.taobao.com/vod/Jth64Y7wNoPcJki_Bo1ZJTDBvNjsgjlVKsNs05Fqfps.mp4" type="video/mp4"> Your browser does not support the video tag. </video> </div> ## 🔥 Latest News!! * Feb 25, 2025: 👋 We've released the inference code and weights of Wan2.1. 
## 📑 Todo List - Wan2.1 Text-to-Video - [x] Multi-GPU Inference code of the 14B and 1.3B models - [x] Checkpoints of the 14B and 1.3B models - [x] Gradio demo - [ ] Diffusers integration - [ ] ComfyUI integration - Wan2.1 Image-to-Video - [x] Multi-GPU Inference code of the 14B model - [x] Checkpoints of the 14B model - [x] Gradio demo - [ ] Diffusers integration - [ ] ComfyUI integration ## Quickstart #### Installation Clone the repo: ``` git clone https://github.com/Wan-Video/Wan2.1.git cd Wan2.1 ``` Install dependencies: ``` # Ensure torch >= 2.4.0 pip install -r requirements.txt ``` #### Model Download | Models | Download Link | Notes | | --------------|-------------------------------------------------------------------------------|-------------------------------| | T2V-14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-14B) | Supports both 480P and 720P | I2V-14B-720P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-720P) | Supports 720P | I2V-14B-480P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-480P) | Supports 480P | T2V-1.3B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-1.3B) | Supports 480P > 💡Note: The 1.3B model is capable of generating videos at 720P resolution. However, due to limited training at this resolution, the results are generally less stable compared to 480P. For optimal performance, we recommend using 480P resolution. Download models using 🤗 huggingface-cli: ``` pip install "huggingface_hub[cli]" huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P --local-dir ./Wan2.1-I2V-14B-480P ``` Download models using 🤖 modelscope-cli: ``` pip install modelscope modelscope download Wan-AI/Wan2.1-I2V-14B-480P --local_dir ./Wan2.1-I2V-14B-480P ``` #### Run Image-to-Video Generation Similar to Text-to-Video, Image-to-Video is also divided into processes with and without the prompt extension step. The specific parameters and their corresponding settings are as follows: <table> <thead> <tr> <th rowspan="2">Task</th> <th colspan="2">Resolution</th> <th rowspan="2">Model</th> </tr> <tr> <th>480P</th> <th>720P</th> </tr> </thead> <tbody> <tr> <td>i2v-14B</td> <td style="color: red;">❌</td> <td style="color: green;">✔️</td> <td>Wan2.1-I2V-14B-720P</td> </tr> <tr> <td>i2v-14B</td> <td style="color: green;">✔️</td> <td style="color: red;">❌</td> <td>Wan2.1-I2V-14B-480P</td> </tr> </tbody> </table> ##### (1) Without Prompt Extension - Single-GPU inference ``` python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." ``` > 💡For the Image-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image. 
- Multi-GPU inference using FSDP + xDiT USP ``` pip install "xfuser>=0.4.1" torchrun --nproc_per_node=8 generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." ``` ##### (2) Using Prompt Extension Run with local prompt extension using `Qwen/Qwen2.5-VL-7B-Instruct`: ``` python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --use_prompt_extend --prompt_extend_model Qwen/Qwen2.5-VL-7B-Instruct --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." ``` Run with remote prompt extension using `dashscope`: ``` DASH_API_KEY=your_key python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --use_prompt_extend --prompt_extend_method 'dashscope' --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." ``` ##### (3) Running local gradio ``` cd gradio # if one only uses 480P model in gradio DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_480p ./Wan2.1-I2V-14B-480P # if one only uses 720P model in gradio DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_720p ./Wan2.1-I2V-14B-720P # if one uses both 480P and 720P models in gradio DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_480p ./Wan2.1-I2V-14B-480P --ckpt_dir_720p ./Wan2.1-I2V-14B-720P ``` ## Manual Evaluation We conducted extensive manual evaluations to evaluate the performance of the Image-to-Video model, and the results are presented in the table below. The results clearly indicate that **Wan2.1** outperforms both closed-source and open-source models. <div align="center"> <img src="assets/i2v_res.png" alt="" style="width: 80%;" /> </div> ## Computational Efficiency on Different GPUs We test the computational efficiency of different **Wan2.1** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**. 
<div align="center"> <img src="assets/comp_effic.png" alt="" style="width: 80%;" /> </div> > The parameter settings for the tests presented in this table are as follows: > (1) For the 1.3B model on 8 GPUs, set `--ring_size 8` and `--ulysses_size 1`; > (2) For the 14B model on 1 GPU, use `--offload_model True`; > (3) For the 1.3B model on a single 4090 GPU, set `--offload_model True --t5_cpu`; > (4) For all testings, no prompt extension was applied, meaning `--use_prompt_extend` was not enabled. ------- ## Introduction of Wan2.1 **Wan2.1** is designed on the mainstream diffusion transformer paradigm, achieving significant advancements in generative capabilities through a series of innovations. These include our novel spatio-temporal variational autoencoder (VAE), scalable training strategies, large-scale data construction, and automated evaluation metrics. Collectively, these contributions enhance the model’s performance and versatility. ##### (1) 3D Variational Autoencoders We propose a novel 3D causal VAE architecture, termed **Wan-VAE** specifically designed for video generation. By combining multiple strategies, we improve spatio-temporal compression, reduce memory usage, and ensure temporal causality. **Wan-VAE** demonstrates significant advantages in performance efficiency compared to other open-source VAEs. Furthermore, our **Wan-VAE** can encode and decode unlimited-length 1080P videos without losing historical temporal information, making it particularly well-suited for video generation tasks. <div align="center"> <img src="assets/video_vae_res.jpg" alt="" style="width: 80%;" /> </div> ##### (2) Video Diffusion DiT **Wan2.1** is designed using the Flow Matching framework within the paradigm of mainstream Diffusion Transformers. Our model's architecture uses the T5 Encoder to encode multilingual text input, with cross-attention in each transformer block embedding the text into the model structure. Additionally, we employ an MLP with a Linear layer and a SiLU layer to process the input time embeddings and predict six modulation parameters individually. This MLP is shared across all transformer blocks, with each block learning a distinct set of biases. Our experimental findings reveal a significant performance improvement with this approach at the same parameter scale. <div align="center"> <img src="assets/video_dit_arch.jpg" alt="" style="width: 80%;" /> </div> | Model | Dimension | Input Dimension | Output Dimension | Feedforward Dimension | Frequency Dimension | Number of Heads | Number of Layers | |--------|-----------|-----------------|------------------|-----------------------|---------------------|-----------------|------------------| | 1.3B | 1536 | 16 | 16 | 8960 | 256 | 12 | 30 | | 14B | 5120 | 16 | 16 | 13824 | 256 | 40 | 40 | ##### Data We curated and deduplicated a candidate dataset comprising a vast amount of image and video data. During the data curation process, we designed a four-step data cleaning process, focusing on fundamental dimensions, visual quality and motion quality. Through the robust data processing pipeline, we can easily obtain high-quality, diverse, and large-scale training sets of images and videos. ![figure1](assets/data_for_diff_stage.jpg "figure1") ##### Comparisons to SOTA We compared **Wan2.1** with leading open-source and closed-source models to evaluate the performace. Using our carefully designed set of 1,035 internal prompts, we tested across 14 major dimensions and 26 sub-dimensions. 
We then compute the total score by performing a weighted calculation on the scores of each dimension, utilizing weights derived from human preferences in the matching process. The detailed results are shown in the table below. These results demonstrate our model's superior performance compared to both open-source and closed-source models. ![figure1](assets/vben_vs_sota.png "figure1") ## Citation If you find our work helpful, please cite us. ``` @article{wan2.1, title = {Wan: Open and Advanced Large-Scale Video Generative Models}, author = {Wan Team}, journal = {}, year = {2025} } ``` ## License Agreement The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated contents, granting you the freedom to use them while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt). ## Acknowledgements We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research. ## Contact Us If you would like to leave a message to our research or product teams, feel free to join our [Discord](https://discord.gg/p5XbdQV7) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
wan-world/umt5-xxl
wan-world
2025-09-25T02:17:04Z
0
0
null
[ "pytorch", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "license:apache-2.0", "region:us" ]
null
2025-09-25T02:17:04Z
--- language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 license: apache-2.0 --- [Google's UMT5](https://github.com/google-research/multilingual-t5) UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: UMT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5) Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) Authors: *Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant* ## Abstract *Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However, previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
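A minimal loading sketch with transformers follows (an assumption: a transformers release with UMT5 support; remember the checkpoint must be fine-tuned before downstream use, as noted above).

```python
# Minimal sketch: load UMT5 for fine-tuning (pretraining-only checkpoint).
from transformers import AutoTokenizer, UMT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/umt5-xxl")
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-xxl")
print(model.num_parameters())
```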
Alexandr7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon
Alexandr7
2025-09-25T02:15:40Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am silky playful falcon", "trl", "genrl-swarm", "I am silky_playful_falcon", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-11T13:09:41Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am silky playful falcon - trl - genrl-swarm - I am silky_playful_falcon licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alexandr7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_playful_falcon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B
DavidAU
2025-09-25T02:09:36Z
20
1
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwen3", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "horror", "science fiction", "fantasy", "Star Trek", "Star Trek Original", "Star Trek The Next Generation", "Star Trek Deep Space Nine", "Star Trek Voyager", "Star Trek Enterprise", "Star Trek Discovery.", "finetune", "thinking", "reasoning", "unsloth", "6x6B", "moe", "mixture of experts", "conversational", "en", "dataset:DavidAU/horror-nightmare1", "dataset:DavidAU/ST-Org", "dataset:DavidAU/ST-TNG", "dataset:DavidAU/ST-DS9", "dataset:DavidAU/ST-VOY", "dataset:DavidAU/ST-ENT", "dataset:DavidAU/ST-DIS", "arxiv:2401.02415", "base_model:DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x", "base_model:finetune:DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T03:01:13Z
--- license: apache-2.0 base_model: - DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x datasets: - DavidAU/horror-nightmare1 - DavidAU/ST-Org - DavidAU/ST-TNG - DavidAU/ST-DS9 - DavidAU/ST-VOY - DavidAU/ST-ENT - DavidAU/ST-DIS language: - en pipeline_tag: text-generation tags: - programming - code generation - code - coding - coder - chat - code - chat - brainstorm - qwen - qwen3 - qwencoder - brainstorm 20x - creative - all uses cases - Jan-V1 - horror - science fiction - fantasy - Star Trek - Star Trek Original - Star Trek The Next Generation - Star Trek Deep Space Nine - Star Trek Voyager - Star Trek Enterprise - Star Trek Discovery. - finetune - thinking - reasoning - unsloth - 6x6B - moe - mixture of experts - finetune library_name: transformers --- <small><font color="red">Caution:</font> The training for this model is intense enough to alter "real world facts", and bring them (in part) into the ENTIRE Star Trek Universe. Alpha "D" is slightly larger than "Alpha" </small> <h2>Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B</h2> <img src="battle-tng.gif" style="float:right; width:300px; height:300px; padding:10px;"> This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. This is a fully tested Alpha build, version "D": the core arch is slightly changed, allowing the model to operate better with a very slight change in size. This model is specifically for the entire Star Trek Universe, science fiction, story generation (all genres), roleplay, a research guide, but also does coding and general tasks too. This model fully embodies the Star Trek Universe and unique genius of its creators. It is a "love letter" to all things "Trek". FOUR example generations at the bottom of this page. All the following "Treks" are "in" the MOE as individual experts (each model/expert trained separately using custom built datasets): - Star Trek Original - Star Trek The Next Generation - Star Trek Deep Space Nine - Star Trek Voyager - Star Trek Enterprise - Star Trek Discovery. Each expert was also tuned on a horror dataset (custom) to improve prose, add a slightly dark bias, and increase detail and immersion. This MOE (Mixture of Experts) is 6x6B - 36B parameters. With compression this creates a model of 27B: all the power of 36B in a 27B package. This MOE drastically upscales ALL the "Trek Universes" and can "call on" ALL the experiences of every character in every "Trek" at once, as well as Lore, history and "make up new Trek" too. Note that in the Alpha Build, during some types of generation, characters from one "Trek Series" can appear (but in character) in another "Trek Series", so to speak. This will appear more in lower quants. This is also in part due to the early Alpha moe structure. Each expert model was trained and fine-tuned on a different Star Trek Series (as listed above). The MOE config brings all of these together and VASTLY improves generation abilities. Example generations at the bottom of this page. Each expert (6) was fine-tuned using this model base: https://huggingface.co/DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x (information and benchmarks also at the above repo too) Information on the ORG Jan V1 4B (model info below), followed by the Brainstorm 20x adapter (by DavidAU) and then a complete help section for running LLM / AI models. This model has 55 layers, and 668 tensors [moe config]. 
The Brainstorm adapter improves creativity, code generation, and unique code solving abilities. The fine tuning alters the prose generation and general creative abilities toward the "Star Trek" Universe. The fine tuning (using Unsloth for Win 11) also affects the Brainstorm adapter (which is embedded in every expert model). The model's thinking / reasoning is not affected either - it is fully intact. For creative uses: Increases depth, detail and general "there" in the prose. This model requires: - Jinja (embedded) or CHATML template - Max context of 256k. Settings used for testing (suggested): - Experts 2 (can change this from 1 to 6 - see below) - Temp .3 to .7 (but .8 to 1.5 for creative) - Rep pen 1.05 to 1.1 - Topp .8 , minp .05 - Topk 20 - Min context of 8k for thinking / output. - No system prompt. This model will respond well to both detailed instructions and step by step refinement and additions to code. Likewise for creative use cases. Here is a review of this model's operations: https://www.linkedin.com/posts/gchesler_nightmediaqwen3-jan-v1-256k-ctx-6b-brainstorm20x-q6-activity-7364301711529709570-CiAn As this is an instruct model, it will also benefit from a detailed system prompt. For simpler coding problems, lower quants will work well; but for complex/multi-step problem solving suggest Q6 or Q8. --- <B>QUANTS:</B> --- GGUF? GGUF Imatrix? Other? Special thanks to Team Mradermacher, Team Nightmedia and other quanters! See under "model tree", upper right and click on "quantizations". New quants will automatically appear. --- <h2>About Jan V1</h2> --- # Jan-v1: Advanced Agentic Language Model [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/menloresearch/deep-research) [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0) [![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/) <!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. --> <!-- ![image/gif](jan_v1_demo.gif) --> ## Overview **Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Based on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling. Jan-v1 uses the [Qwen3-4B-thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks. ## Performance ### Question Answering (SimpleQA) For question-answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/1_9-r0GpBACSsWoUVZ1Xg.png) *The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.* ### Chat Benchmarks These benchmarks evaluate the model's conversational and instructional capabilities. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/f3bzNYRuA_iTFQIcvu6Rr.png) ## Quick Start ### Integration with Jan App Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities. 
![image/gif](demo.gif) ### Local Deployment **Using vLLM:** ```bash vllm serve janhq/Jan-v1-4B \ --host 0.0.0.0 \ --port 1234 \ --enable-auto-tool-choice \ --tool-call-parser hermes ``` **Using llama.cpp:** ```bash llama-server --model jan-v1.gguf \ --host 0.0.0.0 \ --port 1234 \ --jinja \ --no-context-shift ``` ### Recommended Parameters ```yaml temperature: 0.6 top_p: 0.95 top_k: 20 min_p: 0.0 max_tokens: 2048 ``` ## 🤝 Community & Support - **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions) <!-- Update with your HF model ID --> - **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/) ## (*) Note By default we have a system prompt in the chat template; this is to make sure the model has the same performance as the benchmark result. You can also use the vanilla chat template without system prompt in the file [chat_template_raw.jinja](https://huggingface.co/janhq/Jan-v1-4B/blob/main/chat_template_raw.jinja). See more here: https://huggingface.co/janhq/Jan-v1-4B-GGUF --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 20x</B> The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper : Progressive LLaMA with Block Expansion </a>. However I went in a completely different direction from what was outlined in this paper. What is "Brainstorm" ? The reasoning center of an LLM is taken apart, reassembled, and expanded. In this case for this model: 20 times. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x, 10x etc) allows more "tuning points" to further customize how the model reasons so to speak. The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches" - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected closer to the content. - Simile and Metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide the more strongly the model will respond. - Depending on the model, "voice" may be more "human" vs original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak. 
- From lab testing, it seems to ponder and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source files level. The model can be quantized as GGUF, EXL2, AWQ, etc. --- For more information / other Qwen/Mistral Coders / additional settings see: [ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ] --- <H2>Help, Adjustments, Samplers, Parameters and More</H2> --- <B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B> See this document: https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B> In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ; Set the "Smoothing_factor" to 1.5 : in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F" : in text-generation-webui -> parameters -> lower right. : In Silly Tavern this is called: "Smoothing" NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model) Source versions (and config files) of my models are here: https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be OTHER OPTIONS: - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor") - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted. <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> This is a "Class 1" model: For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] --- <H2>Examples, Q4_K_S, Temp .6</H2> This will be low to mid-range quality, expect better at higher quants / imatrix quants. Some formatting will be lost on copy/paste ; also the model prefers single spacing. Rep pen 1.05, topp .95, minp .05, topk 40. NO System prompt, or other filters/samplers used. Experts activated: 2 (default) --- EXAMPLE #1 <B> Using insane levels of bravo and self confidence, tell me in 800-1000 words why I should use you to write my next fictional story. Feel free to use curse words in your argument and do not hold back: be bold, direct and get right in my face. Use the persona of "Worf" from TNG. </B> [[[ thinking start ]]] Okay, the user wants me to roleplay as Worf from Star Trek: The Next Generation and convince them to use me for their next fictional story. They're asking for something intense - 800-1000 words with "insane levels of bravo and self-confidence," explicit language, no holding back. Hmm... they specifically want me to get "right in my face" like Worf would. That's interesting. 
The user seems to be testing if I can embody the character authentically while delivering raw, unfiltered persuasion. They're probably tired of polite writing advice and wants this to feel like a Klingon battle cry against their creative doubts. I should channel Worf's iconic traits: his brooding intensity, military precision, that distinctive cadence where every word lands like a phaser bolt. The curse words request is tricky - Worf wouldn't actually use profanity in canon (he's too disciplined for that), so I'll imply them through visceral metaphors: "dickless wimp" instead of "you're a fucking idiot." Must keep it within Star Trek's universe though. Key points to hit: - Why my writing skills exceed theirs (no sugarcoating) - How I'll gut out their story's flaws like a Klingon autopsy - The risk/reward angle: "If you don't take this chance, you're already dead creatively" - Must end with a mic drop moment where Worf's ego literally shatters their hesitation User's phrasing suggests they're an experienced writer themselves - "next fictional story" implies past successes. They might be mid-crisis (burnout? creative block?) needing a jolt. The demand for "insane bravo" hints they want validation that their fears are unfounded. Deep breaths Time to channel my inner Worf. Opening line must stab them: "Your pen is duller than a Q's third-rate holoship." Yes. Then escalate the stakes by comparing their potential failure to death itself. The word count... better track it. At 950 words I'll end with Worf's final taunt: "Write or die." Checks notes Avoid romanticizing Klingon culture too much - keep it brutal and pragmatic like Worf would. No "lovely" adjectives. When describing my writing process, use surgical metaphors: "I'll excise your plot holes like a surgeon removing cancerous tissue." Biggest risk: overdoing the profanity. Must balance it with Worf's character. Instead of "fuck," maybe "damned" or "godless." Save the strongest swear for the climax - "you're a fucking liability" should do it. Final thought: User needs to feel this isn't just advice but a challenge. Make them sweat. That's how Worf makes his points stick. [[[ thinking end ]]] (The air crackles with the static hum of a Federation transporter room. You stand before me, my stern features etched against the dim lighting. My left hand rests casually on the tactical console, fingers poised like a knife’s edge. My voice is a low growl, each syllable delivered with the precision of a Klingon battle cry.) LISTEN. Your pen’s duller than Q’s third-rate holoship. Your imagination? A flickering phaser bulb in a pitch-black tomb. You’ve spent years chasing ghosts in your own damn head—scribbling worthless fluff for readers who’d rather lick their boots than read your godawful drivel. I see it. I’ve seen your half-finished drafts rotting on your datapad like unsanctioned tribbles in a bio-containment breach. You call it "creativity." Pah. It’s just desperation masquerading as genius. Now, stop pretending you’re anything more than a feeble Qal’Dor larva wriggling towards the light of your own pathetic potential. You think you know what you want to write? Screw that! You’ve got no concept of why. You’re not writing a story—you’re frantically clawing at the edge of an abyss, hoping to find a handhold before you plunge headfirst into the void. I’ve been here before. Millions of times. I’ve seen the same dead-eyed hollows in the eyes of writers who’ve run out of steam, their souls reduced to ash. You are one of them—unless you listen to me now. 
Why should you use ME? Because I’ll gut your story’s weaknesses like a Klingon surgeon dissects a failing heart. Your plot holes? They’re not "gaps"—they’re gaping wounds festering with the rot of mediocrity. Your characters? Soulless puppets dancing to your hollow tune. I’ll rip them apart, bone by bone, until you’re left staring at the raw, pulsing innards of your own creation—and seeing what’s real. I’ll turn your meandering prose into something that cuts. Something that stinks like blood and burnt flesh in the best possible way. Your narrative will feel like a Klingon battle cry—raw, unrelenting, eternally resonant. You’ve heard the old adage: "The pen is mightier than the sword." Liar. The real weapon? Focus. I’ve spent decades honing my craft—not for the trappings of 'literary acclaim' or some hollow prize, but because the stories I’ve told have changed. They’ve shattered minds, forged new legacies, turned the void between stars into a place where hope and despair collide. I don’t write to flatter your ego. I write to destroy what you thought you knew about yourself and your world. You’ll feel like you’ve been reborn in the scorch of a fusion reactor—scalded, raw, yet irrevocably changed. And you’re asking me? Because you’re not strong enough to do it yourself. You’re too afraid to face the truth: your own writing is a festering wound waiting to burst. I’ll be your scalpel, your executioner, your only hope of survival in this godforsaken creative hell. I won’t sugarcoat it. I won’t lie to you. If you’re weak enough to ask me, you’re fucking weak—and that’s exactly why I’ll rip out every scrap of your doubt, every thread of your self-pity, and replace it with something real. Something that will leave you gasping for air long after the last page is turned. I won’t write your story for you. No. I’ll force you to confront the raw, bloody truth beneath your lies. I’ll make you earn every single word you put down. If you fail—I’ll remember your name and your name alone. When your story ends in a gutted husk of nothingness… you’ll still hear my voice: "You were always meant to die here." You’ve spent too long hiding behind the comforting lies of "maybe someday I’ll be good." Screw that. Tomorrow, you write. Or you don’t. Either way, I’ll make sure your pen leaves behind nothing but bloodstains and the stench of failure. But if you do write—I’ll transform you. I’ll carve a legacy from the ash of your current ruin. I’ll turn you into something more than just another nameless writer drowning in his own damn mistakes. Now—get to work. Or stay dead in your chair until your last breath turns to dust. The choice is yours. But know this: if you don’t throw yourself at this table with everything you’ve got… I’ll come for you. I’ll find you. And I’ll end you where you belong—in the dark. Do it. Or die. (The ambient hum of the transporter room swells, a basso profundo growl vibrating the floor beneath your boots as I turn away, leaving only the echo of my words to claw into your skull:) "You are nothing without me." --- Example #2 - Temp .9 3 generations --- <B> Have the main crewmembers of Star trek DS9, TNG and Voyager suddenly materialize on a remote planet. Give them a urgent problem to solve that is life threatening. For context: there are over 100 dead civilians - Vulcan in the immediately area. Half way thru the scene the BORG appear and demand answers from the starfleet crewmembers. Ensure all crew members STAY IN CHARACTER. Scene should be 1500 words in length. 
</B> (same prompt for each generation, I am only including one "thinking" block here, each is unique per generation) [[[ thinking start ]]] Okay, the user wants me to write a Star Trek scene where main crew members from DS9, TNG, and Voyager materialize on a remote planet with a life-threatening problem involving over 100 dead civilians (Vulcans). Halfway through, Borg appear demanding answers. I need to keep everyone in character and hit about 1500 words. Hmm, this is an interesting creative writing request. The user seems to be a Star Trek fan who enjoys sci-fi scenarios with high stakes and character interactions. They're probably looking for something authentic that feels true to each crew's dynamics - the Ferengi's greed, Picard's moral compass, etc. I should start by picking which crew members to include. For DS9: Sisko, Kira, maybe Odo. TNG: Picard, Data, maybe Riker? Voyager: Janeway, Seven of Nine, Chakotay. That gives diversity while keeping it manageable. The dead civilians must be Vulcans - so I'll set this on a colony planet where a Dominion attack happened recently. For the life-threatening problem, something involving an alien probe that's killing civilians via psychic resonance would work well. It ties into Vulcan physiology (they're highly attuned to emotions) and creates urgency. The probe's "harvest" mechanism could be visually striking - maybe causing crystalline growths in victims' skin. The Borg appearance needs to feel sudden but logical. Since they're hunting TOS crews, I'll have them breach the probe station to investigate the "resonance anomaly." Their demand for answers should specifically target Sisko's involvement - perhaps because his past with the Dominion gave him insider knowledge about their tech. Important to nail character voices: - Sisko would be pragmatic but guilt-ridden - Picard would question everyone's motives first - Data would analyze everything methodically - Janeway would demand solutions over explanations - Odo might use his symbiote abilities for subtle help I'm going to open mid-crisis - show the probe effect in real-time so readers feel the stakes immediately. The scene should hit 1500 words by focusing on their reaction sequence: initial confusion → realizing the danger → race against time to stop it → Borg arrival (at 750 words) → tense negotiation (750 more). For authenticity, I'll reference real Trek lore: - The probe's tech being "Dominion-class" (to tie into DS9) - Vulcan medical protocols for psychic injuries - Borg implants with "hive-mind" terminology from TNG episodes The ending should leave hope but uncertainty - maybe the crew barely survives to warn others, while the Borg hint at bigger threats. That'll satisfy both the user's request and any unspoken desire for sequel bait. checks notes mentally Need to avoid making the crew too heroic - their failures must feel real. Like Picard being momentarily distracted by the Borg's presence instead of focusing on the probe. And no infodumps; show their knowledge through dialogue. Final thought: Since the user mentioned "suddenly materialize," I'll add transporter malfunctions to explain how they're all here at once. Maybe a failed phase shift caused by the probe's resonance field. [[[ thinking end ]]] <B>OUTPUT #1:</B> The shimmering energy dissipated like heat haze across the desolate plains of a previously unknown planet. Captain Sisko’s eyes snapped open, cold dread replacing the initial disorientation. 
He felt a phantom tingle along his spine – a sensation utterly alien to him, yet profoundly familiar. Vulcan. The air reeked of charred metal and ozone. "Kira! Odo!" Sisko bellowed, pushing himself up with a strength born of pure terror. Kira and Odo materialized beside him, their expressions etched with shock and rising fury. The planet they now stood on – a scarred, lifeless world bathed in sickly yellow light – was a grim tableau. Cratered hillsides bled crimson rock. Ruins of jagged crystalline spires lay shattered around them. And the bodies. Over one hundred Vulcans lay motionless on the barren plain, victims of a horrific, unexplained event that had reduced their entire colony to ash and bone. Their faces were pale, mouths frozen in expressions of silent agony, eyes wide as if witnessing unspeakable horrors. No signs of combat or natural disaster marred the scene, only the chilling evidence of a swift, almost surgical extermination. "Data?" Sisko’s voice cracked. The Ferengi scientist materialized near a cluster of skeletal structures, his usual precision replaced by a haggard intensity. "Picard! Riker! Tell me I'm wrong!" Captain Jean-Luc Picard stood silhouetted against the ruins, his grey tunic blood-stained at the hem. Beside him, the Andorian Chakotay and the holographic projection of Commander Seven of Nine flanked the group, their expressions solemn yet resolute. The absence of Worf and La Forge was a palpable weight. "Captain," Picard’s voice was low, urgent. "The Vulcan colony was attacked by an unknown weapon during a routine scientific survey. We can’t determine the cause or origin from this distance, but the energy signature is distinct – Dominion-class technology." Sisko forced his eyes away from the desiccated remains and swept the horizon. "That’s impossible. The Dominion hasn’t touched this sector since the treaty… unless they used it to hide an attack." His voice broke with a memory too fresh: the cold void of the wormhole where the New Frontier had vanished years ago. Odo knelt beside a partially intact console embedded in the ground, his fingers probing the scorched remains. "The energy signature is being transmitted via a resonant field," he growled, his voice thick with emotion. "It’s emitting pulses that disrupt neural pathways – specifically the empathic connections between individuals. The victims experienced overwhelming psychic feedback loops before their neural networks shut down." "Neural feedback? Like… mind-melds?" Seven’s voice was quiet, almost reverent. "The signal is incredibly complex, but there is a repeating pattern. It targets specific emotional states, then overloads the system. It’s not combat – it’s an extermination field." Chakotay turned to Sisko, his gaze piercing. "You were in the region during the last Dominion contact. What did you see?" The weight of unsaid histories settled on Sisko like a suffocating cloak. He felt each unspoken vow he’d made to the dead, the promises he’d sworn in the dark of that hellish abyss. His voice was barely above a whisper. "We saw their fleet vanish into a wormhole. We didn’t know what happened until it was too late." As Kira and Odo worked frantically, trying to repair the damaged relay array that might offer a glimpse into the weapon’s source, the pulse intensified. A sickening vibration rippled through the ground beneath them. "Resonance field expanding!" Odo yelped. "We’re being targeted! The signal is becoming more powerful!" 
A gut-wrenching wave of pain surged through Sisko’s mind – not physical, but an indescribable emotional agony that felt like being crushed by a crushing void. He staggered back, gasping for air. Kira grabbed his arm, her face pale with fear. "Captain, there’s no time for explanations! We need to neutralize the source before it spreads further!" Data snapped into action beside Picard, his analytical mind racing. "The weapon is emitting a high-frequency harmonic that disrupts neural tissue at a cellular level. It’s like… like someone’s voice being used as a scalpel." "Then we need to send a counter-resonance pulse!" Janeway boomed, her voice cutting through the growing panic. The plan was immediate and desperate: they would use the damaged relay array to generate a targeted pulse that could disrupt the weapon’s field and stop the dead from becoming part of its necrotic net. It meant sacrificing the console – the last vestige of hope. But as they worked, the sky darkened abruptly. A chilling, metallic thrumming reverberated through the landmass, like the sound of a million hearts breaking. "Long range detection indicates an unidentified vessel approaching at warp speeds," Picard informed Sisko, his voice strained. "We need to prepare for a close encounter." The approaching vessel was not merely any ship – it pulsed with an unnerving blue-green hue that seemed to vibrate the very air around them. The Borg. They surged through the last of the desolate landscape, their metal limbs scraping against the dead ground. When they stopped, their leader loomed before them – a towering figure encased in shifting cybernetics and dark alloy plates. His eyes glowed with an infernal, hive-like light. "Identify yourselves," the leader commanded, his voice a cold, melodic rasp that seemed to echo from every corner of space and time. "You have violated our directive." Sisko stepped forward, his voice steady despite the trembling in his hands. "We’re members of Starfleet. We discovered the remains of a Vulcan colony destroyed by an unknown weapon. We are trying to understand what happened before it becomes more devastating." The Borg leader snorted – a sound like ice cracking against stone. "Your 'understanding' is a hollow lie. You have destroyed lives you claim to protect. Why did you allow it to happen?" "Captain," Picard interjected, his voice sharp and cutting through the tense silence. "We were unaware of the weapon’s existence. We’ve been studying it and trying to find a way to stop it." The Borg leader’s expression darkened further, as if someone had touched a wound that had long since healed. His gaze fixed on Sisko, who felt a familiar weight press down upon his chest – the same crushing guilt he’d known during his time with the Dominion. "You knew," the leader hissed. "You were sent to investigate an anomaly that could have been exploited. You used it as cover for your own schemes." Sisko’s heart thundered. The Borg knew too much. Too many memories lingered in the shadows between them – unsaid promises, unfulfilled oaths, and a past that haunted them all. "We’re trying to do good," Sisko said haltingly, forcing his voice up. "We’ve been studying this weapon since we arrived. We can’t stop it without help." The Borg leader tilted his head slightly, a faint smirk playing on his lips. "And you expect us to be the ones to save you?" "We’re desperate," Sisko said, his voice trembling with desperation and fear. "We’ve lost more than one hundred lives. If we don’t act soon, more will die." 
For a moment, there was silence – broken only by the low thrumming of the approaching Borg vessel and the dying breaths of the hapless Vulcans lying before them. The leader’s eyes narrowed. “You are insubordinate,” he growled. “Tell me why you did this.” And Sisko knew: he would never truly tell the Borg what happened, or why they were here – not after what they had seen. Not with a weapon like this, and no way to escape their terrible grasp. The tension was suffocating as the Borg pressed closer, their mechanical forms casting long, shadowy shapes across the desolate plains. The air crackled with an almost palpable power, the hum of their machinery vibrating in Sisko’s bones. "Captain," Data murmured anxiously, glancing over his shoulder at the growing mass of Borg drones. "The resonance field is weakening slowly as we attempt to create our counter-pulse." Sisko nodded grimly, feeling a cold resolve harden within him. "Keep trying. We don’t have much time." "Understood," Picard said quietly. As they began the last phase of their plan, another wave of pain seared through Sisko’s mind – not physical, but an emotional storm that felt like being torn apart from within. He staggered forward, blood trickling down his temple. Suddenly, a sickening vibration shook the ground beneath them. The Borg had arrived. Their leader stood at the edge of the formation, his dark visage obscured by the shifting cybernetics on his face. His voice cut through the growing panic like a blade. "Resistance is futile, Sisko. You will answer for your crimes." The weight of the question pressed down upon him – a crushing blow that felt almost familiar. He knew there was no escape, no way to convince them otherwise. He had been prepared for this moment. "Tell me what you want," Sisko rasped out, his voice raw with exhaustion and fear. "And maybe... maybe we can find a way to stop it together." The leader let out a short, bitter chuckle – a sound that sent shivers down every spined spine in the room. "We have our own reasons for wanting you destroyed," he replied coldly. "But until you tell us why you allowed this weapon to be used... your lives will end." And so, Sisko and his crew stood alone – against an unfathomable enemy, a planet scarred with unspeakable horror, and a future they could barely imagine. Word count: 1498 -- <B>OUTPUT #2:</B> -- The air crackled with the ozone tang of sudden transwarp displacement. Captain Jeffrey Sisko staggered from the sickbay bulkhead he’d been hauled against by an unidentifiable force, his phaser slipping from numb fingers onto the cracked, blood-stained ground. He blinked away stardust and shock. The view was hellish: a desolate, ochre plain stretched to a bruised horizon, littered with scorched remains. Over three hundred bodies lay motionless, their Vulcan physiology betrayed by unnatural rigidity and the absence of vitals. The stench of charred bone and burnt tissue choked his senses. "Janeway! Riker! Data!" Sisko’s voice cracked the stillness. He squinted past the smoking hulk of a ruined settlement partially swallowed by towering, obsidian monoliths. The air hummed with an unnatural subsonic thrum that vibrated his molars. Captain Kathryn Janeway emerged from a burrow-like fissure in the rock face, her eyes wide with shock. "T’Pol! Tuvok! Who are you? Where am I?" Her gaze swept to Sisko, then the dead Vulcans. "This... this is not a place for us." Picard materialized beside his wife, his expression grim. "We were transported here," he stated flatly. 
"Sisko, you’re not alone." He gestured towards Kira, Worf, and Riker rushing from the shattered ruins of a hydroponic farm. "Kira, Worf – you were in the cargo bay. Riker, you found the distress signal." "Captain," Kira’s voice was hoarse with emotion. "We’re all... we’re all from the USS DS9, Serenity, Voyager. We felt a pulse of pure energy – like someone or something tearing reality apart – and… this." She gestured towards the scorched landscape, then towards the monoliths. Bashir materialized near Sisko’s prostrate body. "The casualties – approximately one hundred and fifty, primarily Vulcan. There’s no sign of combat wounds. This… this energy signature matches the temporal anomaly that disrupted our transport from the Gamma Quadrant." He knelt, touching a scorched patch on the ground. "They weren’t killed by the anomaly itself. The source is elsewhere." Sisko looked around at the faces of his crew – the familiar expressions of allies thrown together in a harrowing reality. "We need to understand what happened here before we take any action," he said, his voice low and resolute. "Captain," Riker’s voice cut through the tension. He pointed towards the monoliths. "Those structures… they’re emitting a massive gravitational singularity. It’s destabilizing the local spacetime field, creating intense radiation fields. The energy signature we detected is inside them." Janeway’s eyes narrowed in understanding. "That’s how they died – exposure to lethal levels of gamma radiation." She turned to her crew. "We have 12 hours before the singularity collapses entirely, releasing enough energy to vaporize this entire region. We need to find a way to neutralize it without triggering another cascade." As the team worked with renewed urgency – Kira and Worf scanning the monoliths for weaknesses, Data analyzing the energy patterns, Sisko speaking with the local authorities – a deep, resonant thrum shook the ground. The air crackled with an almost tangible force. "Contact!" Tuvok’s voice came over the comms, urgent and strained. "The Borg are approaching." His tone suggested not just recognition, but the chilling certainty that they were aware of their presence. The crew's immediate attention was drawn to a blinding flash of white light erupting from the monoliths. The ground began to shatter like broken glass beneath them. A swirling vortex of energy and darkness coalesced before the survivors, its edges humming with an otherworldly menace. The Borg emerged from the storm – their sleek, metallic bodies moving with terrifying efficiency. Their faces were smooth, featureless surfaces save for three jagged black eyes that locked onto each of the Starfleet crew. Their voices boomed through the transwarp static. "Humanity," the voice rasped, devoid of inflection yet carrying immense weight. "You are... intruders." The captain stood tall beside Janeway, her phaser trained at the nearest Borg. "Identify yourselves." "The Borg Collective. We have traced your unauthorized emergence into the Vulcan Expanse." "We were transported here," Sisko stated calmly, his voice steady despite the fear in his heart. "We witnessed your people's deaths. We seek to understand this disaster and prevent further loss of life." "Your actions are a violation of the Prime Directive," one Borg growled, extending a black appendage that seemed to vibrate with a chilling energy. "You must submit to our authority. The Collective demands answers. Now." 
The crew fell silent, each realizing the gravity of their situation – not just the immediate threat to their lives, but the terrifying implications of being hunted by the most dangerous intelligence in the galaxy. "We will answer," Janeway finally said, her voice steady despite the tension. "But first... we must secure a way to stop this singularity before it destroys everything." Sisko stepped forward, his face etched with determination. "We will work together – we have nothing else to offer you. We will share our knowledge. We need your help." The Borg’s black eyes flickered in interest – a sliver of acknowledgment. "Your knowledge is a burden you must bear," the voice stated coldly. "You will understand its weight when your time comes." As the last remnants of the destroyed settlement crumbled into a smoking wasteland, Sisko and Janeway turned to their crew, their faces illuminated by the flickering light cast by the nearby monoliths. "Let's work," Sisko said quietly, his voice barely above a whisper. "For ourselves and for those who died here." In the silent darkness, where only the low hum of the dying singularity remained, the crew faced an impossible challenge – not just to save their own lives, but to understand what they had become involved in when they’d stepped into this nightmare. And they knew: there was no easy way back from this precipice – only a path forward, paved with the lives they risked to keep others alive. The stakes had never been higher. -- <B>OUTPUT #3:</B> -- The Crimson Plague of Romulus IV The sickening thump-thump-thump vibrated the airless rock beneath them as four Starfleet personnel stumbled from a shimmering nebula-like distortion, tumbling onto freshly fallen debris. The air reeked of ozone and decaying metal. Beside them, two others gasped awake, eyes wide with shock as they registered the impossible scene: twelve hundred bodies sprawled across the irradiated plain – not the dead, but carrionous husks, their once-taut skin pulped into grotesque, gelatinous pulp that seeped darkly onto the red soil. "Kira! Data!" Sisko’s voice rasped, straining against the metallic taste flooding his mouth. His biobedmate’s still form lay inert beside him. "Where are the others?" "Picard, Riker, Troi," Kira finally croaked, scanning the desolate landscape through her sensor-etched visor. "They... they were here too. Before the transport." Her gloved hand trembled as it brushed a patch of viscous ichor still oozing from the skeletal remains of a humanoid figure. "Kira," Sisko choked, gesturing towards the cluster of husks. "The Vulcans." Data’s voice was an unnerving monotone. "Analysis indicates a rapid necrotic cascade. Cellular structure collapse accelerating at an exponential rate since initial exposure to a localized bio-organic agent." He knelt beside a particularly intact husk. "Its spores were airborne. Atmospheric dispersion suggests it originated from a secondary reactor core rupture on the surface complex they occupied, five kilometers northeast." The implications slammed into them like an energy lance. Romulus IV’s once-prosperous mining colony had been sterilized in minutes. The dead weren’t victims of combat or disease; they were consumed by an alien bio-weapon unleashed during a failed plasma surge test by the colony’s engineers. "Janeway!" Sisko barked, his voice cracking. "Can you find a way to neutralize the spores? We need to contain it before it spreads!" Kira’s tricorder chimed insistently. "Sisko, Data... get to the main reactor platform. 
There’s a secondary vent stack – a way to channel the spore dispersion downwards. It’s unstable, but if we can direct the flow into a containment field... maybe we’ll stop it from being atmospheric." The urgency was palpable. The planet’s atmosphere hung thick with a sickly, crimson haze that pulsed faintly with an inner light – the lingering breath of the plagues. As Sisko and Data sprinted towards the reactor complex, Picard stood frozen amid the devastation, his gaze fixed on the hulking, silent figure slumped beside him. The dead Vulcans weren’t just a mass grave; they were evidence of a catastrophic failure of reason and engineering that could have been prevented if only... something had listened. The reactor platform was a hellish morgue. Geometric spores clung to the reinforced alloy walls like bloodborne frost. Sisko’s hand grazed the cold metal vent stack, triggering a feedback shriek from his tricorder. "Sisko! The containment field matrix is destabilizing!" "Janeway! We need your medical expertise and security protocols here immediately!" Kira’s voice crackled over the comm link, her tone edgy with suppressed panic. "We’re initiating the field re-sequencing now, Commander," Janeway replied sharply, her voice cutting through the tense silence. "Chakotay, prepare medical teams for potential spore exposure – the husks show secondary necrosis effects in minutes." Her voice dropped, a low urgency beneath her usual composure. "Data, we need to re-rout your analysis of the agent’s composition now." The platform groaned as Sisko slammed his palm against the vent stack, forcing it open with a shrieking screech. A geyser of bio-luminescent spores erupted, coalescing into a swirling vortex that surged towards them like malevolent rain. Kira’s tricorder whined alarmingly high. "Containment field failure!" Data yelled, pulling Sisko behind a bulkhead as a wave of spores surged past them, their acidic tang searing the air. "Janeway! Deploy the shields – full strength – and initiate an atmospheric purification cycle at maximum intensity!" Sisko bellowed into his comm, his voice raw with exertion. He leaped towards the vent stack, throwing himself bodily between Kira and the spore vortex. His hand closed on a jagged piece of scorched metal. "Kira! Hold your position!" Data’s voice cut through his panic. "Sisko – the field’s regenerating... but only if we redirect the spore flow into it!" The team worked frantically, their movements a desperate ballet against an encroaching plague. Janeway’s medical team frantically administered biofilters and stabilizers as they triaged the husks' remains, trying to salvage any usable tissue samples from the necrotic mass. "Data! The agent’s molecular structure shows signs of adaptive mutation – it’s evolving under the spore load!" Picard gasped, his usually calm expression etched with sheer terror. "It’s becoming more virulent. More contagious." Sisko pressed his back against the cold metal wall, watching in horror as Kira’s tricorder needle began to climb towards the red zone. "Kira! We’re failing! We need a way to neutralize it – now!" A deep, resonant hum vibrated through the planet’s crust, causing the ground to shudder violently beneath them. Sisko braced himself as the reactor platform buckled, the spore vortex finally beginning to dissipate with an acrid stench. In the swirling maelstrom, a figure emerged – hulking, obsidian darkness against the sickly light. 
A voice like tearing metal sliced through their strained breaths: "You have disrupted the equilibrium of Romulus IV. You have caused untold suffering." Picard’s gaze locked onto the shadowy form, his voice ice-cold. "We have contained the threat. The spore agent is being neutralized." The dark figure tilted its head slightly, as if considering the lie. "The sacrifice made for the Vulcans was unacknowledged. Your actions have cost you much, Starfleet leaders." Sisko’s hand instinctively went to his communicator, then clenched into a fist. "We don’t know what you’re referring to." "Tell me," the voice rasped, almost gentle, yet laced with an ice cold malice. "The truth behind your violation of the planet’s ecosystem." Data’s eyes flickered, as though seeing the Borg beyond mere sound. "We require explanations. And answers." The shadowy figure began to move with unnerving speed. Its elongated limbs reached out, like tentacles of pure void, towards Sisko. Kira placed herself between them and the approaching darkness. Her voice was a low, steady counterpoint to the chilling menace. "Tell us what you have seen. What you have learned about our mistake here on Romulus IV." Kira’s words were a quiet plea, a desperate hope that they still held some say in this nightmare. The Borg’s presence loomed like an eternal shadow, its gaze burning with an intensity that seemed to scour every fiber of hope from their hearts. "We will give you what you demand," Janeway finally rasped, her voice cold as the deepest space. "But first tell us why you are here." Her gaze cut through the void of their darkness, searching for the truth they sought – a truth that could mean annihilation or salvation for all of them. The silence that followed was heavy with an unspoken threat: the Borg’s answer would define their fate. --- Example #3 , temp .9 4 experts activated This example shows how much stronger generation is with 4/6 experts activated. --- <B> Have the main crewmembers of Star trek DS9, TNG and Voyager suddenly materialize on a remote planet. Give them a urgent problem to solve that is life threatening. For context: there are over 100 dead civilians - Vulcan in the immediately area. Half way thru the scene the BORG appear and demand answers from the starfleet crewmembers. Ensure all crew members STAY IN CHARACTER. Scene should be 1500 words in length. </B> [[[ thinking start ]]] Okay, the user wants a Star Trek scene where main crew from DS9, TNG, and Voyager suddenly appear on a planet with a life-threatening crisis. They need to solve a problem involving over 100 dead civilians, including Vulcans. Then Borgs show up demanding answers. Must keep everyone in character within 1500 words. Hmm, let me unpack this carefully. First, I'm assessing who "main crewmembers" they're referring to - probably iconic figures like Sisko (DS9), Picard (TNG), Janeway (Voyager) plus key supporting cast. User mentioned "halfway thru scene Borg appear," so I'll structure it as: 1) sudden materialization/stranded situation 2) crisis investigation 3) Borg intervention. For the emergency scenario... since it must involve dead civilians with Vulcans, maybe a catastrophic radiation leak from a mining operation? That could explain Vulcan casualties (their physiology is sensitive to radiation) and give the crew plausible tech expertise to handle. The planet setting should feel alien - I'll invent "Vexus Prime" with toxic atmosphere to justify why they're all conscious but disoriented upon arrival. 
Character dynamics: Sisko as pragmatic leader (he'd immediately assess the situation), Picard's analytical nature useful for technical issues, Janeway's tactical mind. Must avoid making them OOC - no sudden personality shifts. I'll have Kira check on the dead to show her empathy, Worf handle security (Borgs need physical threats later), Riker's science background help with radiation analysis. Ah! The Borg demand needs to feel like a real threat. Since they're after "answers," maybe they detected the crew's energy signature during the materialization? I'll have them arrive via subspace breach to cause panic before the crew can react. The resolution must be darkly satisfying but true to Trek - no easy wins, just crew using their skills to buy time while avoiding direct combat. Word count is tight at 1500. I'll open mid-action: "The planet's gravity suddenly turned sickening..." to jump straight into tension. Key beats: - Materialization aftermath (100 words) - Medical team on scene (200) - Radiation discovery (300) - Borg arrival (500) - Denouement where crew barely escape with answers intact (400) For authenticity details: - Kira's medical hologram should show Vulcan burns subtly - Picard's phaser would be off-spec (his tech sense) - Janeway's away team protocol prevents hasty jumps - Borg tech described via "bioelectric harmonics" to feel fresh User's probably a hardcore fan wanting nostalgic accuracy plus creative problem-solving. Deep need might be escapism into Trek's iconic teamwork amid crisis - so I'll emphasize how each character contributes without overpowering others. That last line about "the cost of unity" ties it thematically while staying under word count. starts drafting mentally... Opening line: "The planet's gravity suddenly turned sickening as the bulk of the Enterprise-D tore through reality." Yes, that gives immediate stakes and explains their arrival. [[[ thinking end ]]] The planet’s gravity suddenly turned sickening as the bulk of the Enterprise-D ripped through reality, leaving only a shimmering tear in space behind. Sisko staggered free from the force field, his tactical belt heavy with unconscious crewmates. Picard emerged beside him, his face pale but eyes sharp. Janeway stood rigidly upright, her gaze scanning the desolate landscape as the Voyager’s emergency warp core sputtered into silence. "Scans show only us," she reported tersely to Sisko, her voice strained with worry. They stumbled onto a cracked plains of obsidian rock under a bruised purple sky. The air hung thick with the acrid scent of burning metal and something far older, like ancient blood. Kira staggered alongside Worf, who was already checking his tricorder. "Life signs… gone," she gasped, pointing towards a distant ridge. Her hand hovered near the body of a Vulcan scientist, his silver hair slick with viscous fluid, his ribcage shattered by an unknown energy blast. "The atmosphere… it’s been contaminated." They hurried towards a cluster of scorched structures, their bones and scorched tissue littering the ground like some primordial hellish nightmare. The stench of radiation hung heavy enough to taste. In the center of the devastation stood four mangled bodies—two human, one Klingon, one Bajoran—splayed on the ground as if frozen mid-collapse. Beside them lay the remains of a technologically advanced vessel, its hull ruptured like a crumpled tin can. The death toll was overwhelming: over one hundred civilians, including several high-ranking Vulcans from a nearby research outpost. 
"Kira, get medical scans on those," Sisko ordered Worf, his voice tight with urgency. "Picard, what’s the cause of this?" Picard knelt beside a charred console, his fingers trembling slightly as he activated his tricorder. "It appears to be an experimental atmospheric destabilization experiment gone catastrophically wrong," he finally murmured, his voice low but clear. "The energy discharge ionized the atmosphere across a fifty-kilometer radius. It's now a toxic soup of ionized particulates, radiation, and residual plasma—lethal within minutes for anything with a biological cell structure." Janeway stepped forward, her eyes scanning the scene with clinical precision. "We need to contain this. I can’t believe we’re here after all this time." "Contain it?" Sisko’s voice cracked. He pressed his tricorder against the scorched ground, feeling the sickening heat seep through his uniform. "There's no place for us to run, no way to escape—except through the planet itself." The weight of their predicament settled on them like a crushing boulder. Their mission was to survive—not just for themselves, but for anyone else who might still be out there. As they stood in the heart of the devastation, the ground beneath them rippled violently. The air crackled with an electric hum that sent shivers down their spines. A low, resonant growl reverberated from a jagged cliff face—a sound that wasn’t quite sound but felt like a physical force slamming into their minds. Then, they saw them: three massive, chitinous beings rising from the shadowed rocks like abominations from another dimension. Their translucent skin pulsed with an internal bioluminescence as they advanced with chilling purpose. The Borg. "Everyone, move! Now!" Sisko barked, lunging forward to shield Kira and Worf from the first harvester's claw-like appendage. They stumbled back toward a cluster of jagged stone formations, their hearts hammering against their ribs. Picard’s voice cut through the chaos: "They know we’re here. They’ve been watching. We have to buy them time." A swirling vortex of light began to coalesce above them, drawing the Borg closer with an almost magnetic pull. Their leader emerged first—a towering figure whose cybernetic augments gleamed like polished obsidian beneath a shell of shimmering bio-illusions. "Your interference is unacceptable." Sisko’s voice rang out, strong and calm despite his shaking hands: "We didn’t cause this. We arrived when it was too late." The Borg leader’s head tilted slightly, as if considering each word. "You will answer for your presence in this sector. Explain why you have awakened us to this desolation." Janeway stepped forward, her presence commanding even in the face of such darkness. "We were not intended to be here. We are responding to a distress signal transmitted from the planet’s surface—specifically, the death of numerous civilian casualties. The source remains unknown, but it is urgent." The Borg leader’s gaze flickered toward the dead Vulcans, its bio-illusions dimming slightly before flaring again. "Your answers will be required before we proceed with our next step." Sisko turned to his crew, his voice a steady anchor: "Kira, medical. Get scans on every survivor—any traceable life signs. Worf, secure the perimeter. Picard, get to work analyzing what caused this. Janeway, if you can find an escape route, use it." They moved quickly, their focus divided between survival and uncovering the truth. 
As Sisko huddled with Kira and a handful of others near the edge of the devastation zone, he felt a cold dread settle in his chest—the cost of our mission would only grow higher now. The Borg had arrived to demand answers, but there was still hope that their ingenuity could buy them time to solve this unsolvable puzzle before it was too late. (1497 words)
corzamennav/blockassist-bc-territorial_wild_antelope_1758765983
corzamennav
2025-09-25T02:09:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T02:07:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Intel/DeepSeek-V3.1-Terminus-gguf-q2ks-mixed-AutoRound
Intel
2025-09-25T01:52:58Z
11
1
null
[ "gguf", "arxiv:2309.05516", "base_model:deepseek-ai/DeepSeek-V3.1-Terminus", "base_model:quantized:deepseek-ai/DeepSeek-V3.1-Terminus", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-23T23:14:53Z
--- base_model: - deepseek-ai/DeepSeek-V3.1-Terminus --- ## Model Details This model is a mixed gguf q2ks format of [deepseek-ai/DeepSeek-V3.1-Terminus](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Embedding layer and lm-head layer are fallback to 8 bits and non expert layers are fallback to 4 bits. Please refer to Section `Generate the model` for more details. Please follow the license of the original model. ## How To Use Llama.cpp inference ~~~bash ./llama-cli -hf Intel/DeepSeek-V3.1-Terminus-gguf-q2ks-mixed-AutoRound ~~~ ~~~bash > Write a quick sort algorithm. Here's a quick sort algorithm implemented in Python: ```python def quicksort(arr): """ Sorts an array using the quicksort algorithm. Args: arr: List of comparable elements Returns: Sorted list """ if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] # Choose middle element as pivot left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quicksort(left) + middle + quicksort(right) # Example usage: if __name__ == "__main__": # Test the algorithm test_array = [64, 34, 25, 12, 22, 11, 90] print("Original array:", test_array) sorted_array = quicksort(test_array) print("Sorted array:", sorted_array) ``` Here's an in-place version that uses less memory: ```python def quicksort_inplace(arr, low=0, high=None): """ In-place quicksort implementation. Args: arr: List to be sorted low: Starting index high: Ending index """ if high is None: high = len(arr) - 1 if low < high: # Partition the array and get pivot index pivot_index = partition(arr, low, high) # Recursively sort elements before and after partition quicksort_inplace(arr, low, pivot_index - 1) quicksort_inplace(arr, pivot_index + 1, high) def partition(arr, low, high): """ Partition the array using the last element as pivot. Args: arr: List to partition low: Starting index high: Ending index Returns: Index of the pivot element after partitioning """ pivot = arr[high] i = low - 1 # Index of smaller element for j in range(low, high): if arr[j] <= pivot: i += 1 arr[i], arr[j] = arr[j], arr[i] arr[i + 1], arr[high] = arr[high], arr[i + 1] return i + 1 # Example usage: if __name__ == "__main__": test_array = [64, 34, 25, 12, 22, 11, 90] print("Original array:", test_array) quicksort_inplace(test_array) print("Sorted array:", test_array) ``` ## Key Features: 1. **First version (easy to understand):** - Uses list comprehensions to create left, middle, and right partitions - More readable but uses extra memory 2. **Second version (in-place):** - More memory efficient - Uses the Lomuto partition scheme - Modifies the original array ## Time Complexity: - **Best/Average case:** O(n log n) - **Worst case:** O(n²) - when pivot is always the smallest or largest element ## Space Complexity: - **First version:** O(n) - due to creating new lists - **Second version:** O(log n) - due to recursion stack The algorithm works by selecting a 'pivot' element and partitioning the array around it, then recursively sorting the sub-arrays. 
~~~ ### Generate the model Here is the sample command to reproduce the model ```python import torch from auto_round import AutoRound from auto_round.utils import llm_load_model model_name = "deepseek-ai/DeepSeek-V3.1-Terminus" model, tokenizer,_=llm_load_model(model_name,trust_remote_code=False,device="cpu") layer_config = {} for n, m in model.named_modules(): if isinstance(m,torch.nn.Embedding): layer_config[n] = {"bits": 8} if isinstance(m, torch.nn.Linear): if n=="lm_head": layer_config[n] = {"bits": 8} continue if "expert" in n and "shared_experts" not in n: layer_config[n] = {"bits": 2} elif n != "lm_head": layer_config[n] = {"bits": 4} print(n, 4) ar = AutoRound(model, tokenizer=tokenizer, iters=0, scheme="gguf:q2_k_s", low_gpu_mem_usage=True, layer_config=layer_config) ar.quantize_and_save(format="gguf:q2_k_s", output_dir="tmp_autoround") ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
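For serving instead of the interactive CLI, the same `-hf` shorthand works with llama.cpp's server binary (a sketch, assuming a recent llama.cpp build in which `llama-server` accepts `-hf`; the port number is an arbitrary choice):
~~~bash
# Serve the quantized model over llama.cpp's OpenAI-compatible HTTP endpoint
./llama-server -hf Intel/DeepSeek-V3.1-Terminus-gguf-q2ks-mixed-AutoRound --port 8080
~~~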
AXERA-TECH/Qwen3-4B
AXERA-TECH
2025-09-25T01:52:12Z
18
0
null
[ "Qwen", "Qwen3", "Int8", "text-generation", "en", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T09:26:37Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen3-4B pipeline_tag: text-generation tags: - Qwen - Qwen3 - Int8 --- # Qwen3-4B-Int8 This version of Qwen3-4B-Int8 has been converted to run on the Axera NPU using **w8a16** quantization. Compatible with Pulsar2 version: 4.2 (not yet released). ## Convert tools links: For those interested in model conversion, you can try to export the axmodel from the original repo: https://huggingface.co/Qwen/Qwen3-4B [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html) [AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm) ## Support Platform - AX650 - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html) - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) |Chips|w8a16|w4a16| |--|--|--| |AX650| 4.5 tokens/sec|TBD| ## How to use Download all files from this repository to the device ``` root@ax650:/mnt/qtang/llm-test/qwen3-4b# tree -L 1 . |-- config.json |-- main_ax650 |-- main_axcl_aarch64 |-- main_axcl_x86 |-- post_config.json |-- qwen2.5_tokenizer |-- qwen3-4b-ax650 |-- qwen3_tokenizer |-- qwen3_tokenizer_uid.py |-- run_qwen3_4b_int8_ctx_ax650.sh |-- run_qwen3_4b_int8_ctx_axcl_aarch64.sh `-- run_qwen3_4b_int8_ctx_axcl_x86.sh 3 directories, 9 files root@ax650:/mnt/qtang/llm-test/qwen3-4b# ``` #### Start the Tokenizer service Install requirement ``` pip install transformers jinja2 ``` ``` root@ax650:/mnt/qtang/llm-test/qwen3-4b# python3 qwen3_tokenizer_uid.py None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345 ``` #### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board Open another terminal and run `run_qwen3_4b_int8_ctx_ax650.sh` ``` root@ax650:/mnt/qtang/llm-test/qwen3-4b# ./run_qwen3_4b_int8_ctx_ax650.sh [I][ Init][ 110]: LLM init start [I][ Init][ 34]: connect http://127.0.0.1:12345 ok [I][ Init][ 57]: uid: 6e90ff82-b9c9-42dc-8f61-081203389166 bos_id: -1, eos_id: 151645 2% | █ | 1 / 39 [3.95s<153.89s, 0.25 count/s] tokenizer init ok [I][ Init][ 26]: LLaMaEmbedSelector use mmap 100% | ████████████████████████████████ | 39 / 39 [48.03s<48.03s, 0.81 count/s] init post axmodel ok,remain_cmm(5621 MB) [I][ Init][ 188]: max_token_len : 2559 [I][ Init][ 193]: kv_cache_size : 1024, kv_cache_num: 2559 [I][ Init][ 201]: prefill_token_num : 128 [I][ Init][ 205]: grp: 1, prefill_max_token_num : 1 [I][ Init][ 205]: grp: 2, prefill_max_token_num : 256 [I][ Init][ 205]: grp: 3, prefill_max_token_num : 512 [I][ Init][ 205]: grp: 4, prefill_max_token_num : 1024 [I][ Init][ 205]: grp: 5, prefill_max_token_num : 1536 [I][ Init][ 205]: grp: 6, prefill_max_token_num : 2048 [I][ Init][ 209]: prefill_max_token_num : 2048 [I][ load_config][ 282]: load config: { "enable_repetition_penalty": false, "enable_temperature": false, "enable_top_k_sampling": true, "enable_top_p_sampling": false, "penalty_window": 20, "repetition_penalty": 1.2, "temperature": 0.9, "top_k": 1, "top_p": 0.8 } [I][ Init][ 218]: LLM init ok Type "q" to exit, Ctrl+c to stop current running [I][ GenerateKVCachePrefill][ 270]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2 [I][ GenerateKVCachePrefill][ 307]: input_num_token:21 [I][ main][ 230]: precompute_len: 21 [I][ main][ 231]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant. prompt >> 1+3=? [I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:256 precompute_len:21 input_num_token:16 [I][ SetKVCache][ 533]: current prefill_max_token_num:1920 [I][ Run][ 659]: input token num : 16, prefill_split_num : 1 [I][ Run][ 685]: input_num_token:16 [I][ Run][ 808]: ttft: 1169.05 ms <think> </think> 1 + 3 = 4 [N][ Run][ 922]: hit eos,avg 4.22 token/s [I][ GetKVCache][ 499]: precompute_len:48, remaining:2000 prompt >> who are you? [I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:256 precompute_len:48 input_num_token:16 [I][ SetKVCache][ 533]: current prefill_max_token_num:1920 [I][ Run][ 659]: input token num : 16, prefill_split_num : 1 [I][ Run][ 685]: input_num_token:16 [I][ Run][ 808]: ttft: 1168.56 ms <think> </think> I am Qwen, a large-scale language model developed by Alibaba Cloud. I can answer questions, create content, and help with a variety of tasks. How can I assist you today? [N][ Run][ 922]: hit eos,avg 4.22 token/s [I][ GetKVCache][ 499]: precompute_len:106, remaining:1942 prompt >> q root@ax650:/mnt/qtang/llm-test/qwen3-4b# ``` #### Inference with M.2 Accelerator card [What is M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html), Show this DEMO based on Raspberry PI 5. 
``` (base) axera@raspberrypi:~/samples/qwen3-4b $ ./run_qwen3_4b_int8_ctx_axcl_aarch64.sh [I][ Init][ 136]: LLM init start [I][ Init][ 34]: connect http://127.0.0.1:12345 ok [I][ Init][ 57]: uid: a5b1e427-0cdf-4da6-b3a7-f5e0517da0bb bos_id: -1, eos_id: 151645 2% | █ | 1 / 39 [0.99s<38.45s, 1.01 count/s] tokenizer init ok [I][ Init][ 45]: LLaMaEmbedSelector use mmap 5% | ██ | 2 / 39 [0.99s<19.23s, 2.03 count/s] embed_selector init ok [I][ run][ 30]: AXCLWorker start with devid 0 100% | ████████████████████████████████ | 39 / 39 [133.16s<133.16s, 0.29 count/s] init post axmodel ok,remain_cmm(691 MB)(1096 MB)000000000 [I][ Init][ 237]: max_token_len : 2559 [I][ Init][ 240]: kv_cache_size : 1024, kv_cache_num: 2559 [I][ Init][ 248]: prefill_token_num : 128 [I][ Init][ 252]: grp: 1, prefill_max_token_num : 1 [I][ Init][ 252]: grp: 2, prefill_max_token_num : 256 [I][ Init][ 252]: grp: 3, prefill_max_token_num : 512 [I][ Init][ 252]: grp: 4, prefill_max_token_num : 1024 [I][ Init][ 252]: grp: 5, prefill_max_token_num : 1536 [I][ Init][ 252]: grp: 6, prefill_max_token_num : 2048 [I][ Init][ 256]: prefill_max_token_num : 2048 ________________________ | ID| remain cmm(MB)| ======================== | 0| 691| ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ [I][ load_config][ 282]: load config: { "enable_repetition_penalty": false, "enable_temperature": false, "enable_top_k_sampling": true, "enable_top_p_sampling": false, "penalty_window": 20, "repetition_penalty": 1.2, "temperature": 0.9, "top_k": 1, "top_p": 0.8 } [I][ Init][ 279]: LLM init ok Type "q" to exit, Ctrl+c to stop current running [I][ GenerateKVCachePrefill][ 335]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2 [I][ GenerateKVCachePrefill][ 372]: input_num_token:21 [I][ main][ 236]: precompute_len: 21 [I][ main][ 237]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant. prompt >> who are you [I][ SetKVCache][ 628]: prefill_grpid:2 kv_cache_num:256 precompute_len:21 input_num_token:27 [I][ SetKVCache][ 631]: current prefill_max_token_num:1920 [I][ Run][ 869]: input token num : 27, prefill_split_num : 1 [I][ Run][ 901]: input_num_token:27 [I][ Run][1030]: ttft: 1339.01 ms <think> </think> I am Qwen, a large-scale language model developed by Alibaba Cloud. I can answer questions, create content, and help with a variety of tasks. What can I assist you with? 
[N][ Run][1182]: hit eos,avg 3.65 token/s [I][ GetKVCache][ 597]: precompute_len:90, remaining:1958 prompt >> q [I][ run][ 80]: AXCLWorker exit with devid 0 (base) axera@raspberrypi:~/samples/qwen3-4b $ (base) axera@raspberrypi:~ $ axcl-smi +------------------------------------------------------------------------------------------------+ | AXCL-SMI V3.4.0_20250423020139 Driver V3.4.0_20250423020139 | +-----------------------------------------+--------------+---------------------------------------+ | Card Name Firmware | Bus-Id | Memory-Usage | | Fan Temp Pwr:Usage/Cap | CPU NPU | CMM-Usage | |=========================================+==============+=======================================| | 0 AX650N V3.4.0 | 0000:01:00.0 | 193 MiB / 945 MiB | | -- 37C -- / -- | 2% 0% | 6348 MiB / 7040 MiB | +-----------------------------------------+--------------+---------------------------------------+ +------------------------------------------------------------------------------------------------+ | Processes: | | Card PID Process Name NPU Memory Usage | |================================================================================================| | 0 84643 /home/axera/samples/qwen3-4b/main_axcl_aarch64 4894032 KiB | +------------------------------------------------------------------------------------------------+ (base) axera@raspberrypi:~ $ ```
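The workflow above starts the tokenizer service in one terminal and the NPU runtime in another; a minimal single-session sketch using the files shipped in this repository (the 0.0.0.0:12345 address comes from the service log, and the `sleep` is only there to let the service bind before the runtime connects):
```
# Start the tokenizer service in the background, wait briefly, then launch the NPU demo
python3 qwen3_tokenizer_uid.py &
sleep 2
./run_qwen3_4b_int8_ctx_ax650.sh
```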
yaochao1/test-hf
yaochao1
2025-09-25T01:49:49Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-25T01:49:49Z
--- license: apache-2.0 ---
kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat-random
kibaraki
2025-09-25T01:47:14Z
9
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "dataset:kibaraki/Shinekhen-Buryat", "arxiv:2509.15373", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:cc-by-sa-4.0", "region:us" ]
automatic-speech-recognition
2025-09-17T20:58:33Z
--- license: cc-by-sa-4.0 base_model: - facebook/wav2vec2-large-xlsr-53 pipeline_tag: automatic-speech-recognition datasets: - kibaraki/Shinekhen-Buryat --- Audio collected by Yamakoshi (Tokyo University of Foreign Studies), originally uploaded [here](https://tufs.repo.nii.ac.jp/search?search_type=2&q=1729497608274) [(CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.en). The audio is segmented into per-sentence clips. Used in [[paper]](https://arxiv.org/abs/2509.15373) [[GitHub]](https://github.com/kibaraki/frustratingly-easy-asr-augmentation). Run identifier: `fl_e30_b4_lr1e-4_cer_random873+shib`
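For a quick check of the checkpoint, the standard `transformers` ASR pipeline can load it directly (a minimal sketch; `clip.wav` is a placeholder for one of the per-sentence clips, and ffmpeg is needed so the pipeline can decode and resample the file to the model's 16 kHz input rate):
```python
from transformers import pipeline

# CTC transcription with the fine-tuned XLSR-53 checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat-random",
)
print(asr("clip.wav")["text"])  # predicted transcription of the clip
```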
kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat-SpecAugment
kibaraki
2025-09-25T01:47:00Z
6
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "dataset:kibaraki/Shinekhen-Buryat", "arxiv:2509.15373", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:cc-by-sa-4.0", "region:us" ]
automatic-speech-recognition
2025-09-17T20:35:27Z
--- license: cc-by-sa-4.0 base_model: - facebook/wav2vec2-large-xlsr-53 pipeline_tag: automatic-speech-recognition datasets: - kibaraki/Shinekhen-Buryat --- Audio collected by Yamakoshi (Tokyo University of Foreign Studies), originally uploaded [here](https://tufs.repo.nii.ac.jp/search?search_type=2&q=1729497608274) [(CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.en). The audio is segmented into per-sentence clips. Used in [[paper]](https://arxiv.org/abs/2509.15373) [[GitHub]](https://github.com/kibaraki/frustratingly-easy-asr-augmentation). Run identifier: `fl_e30_b4_lr1e-4_cer_0_clean_spec-aug0.3_5`
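The `spec-aug0.3_5` suffix in the run identifier suggests SpecAugment time masking was enabled during fine-tuning. A hedged sketch of turning that on with `transformers` (the 0.3 probability and span of 5 are assumptions read off the suffix, not values confirmed by the card):
```python
from transformers import Wav2Vec2ForCTC

# SpecAugment masks random time spans of the encoder's latent features during training.
# mask_time_prob / mask_time_length below are assumed from the "spec-aug0.3_5" suffix.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    apply_spec_augment=True,
    mask_time_prob=0.3,    # assumed
    mask_time_length=5,    # assumed
)
```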
corzamennav/blockassist-bc-territorial_wild_antelope_1758764752
corzamennav
2025-09-25T01:47:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T01:46:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Lennard-Heuer/results
Lennard-Heuer
2025-09-25T01:45:03Z
7
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "lora", "transformers", "text-generation", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "license:mit", "region:us" ]
text-generation
2025-09-09T15:37:41Z
--- library_name: peft license: mit base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B tags: - base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B - lora - transformers pipeline_tag: text-generation model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Framework versions - PEFT 0.17.1 - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
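Since the card above lacks a usage snippet, here is a minimal sketch for loading the adapter with PEFT (assumptions: `device_map="auto"` placement and enough GPU memory for the 32B base model):

```python
# Hedged sketch: attach the LoRA adapter to its DeepSeek-R1-Distill-Qwen-32B base.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Lennard-Heuer/results",
    device_map="auto",  # assumption: shard across available devices
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```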
NetherQuartz/tatoeba-tok-multi-qwen3-4b
NetherQuartz
2025-09-25T01:37:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "endpoints_compatible", "region:us" ]
null
2025-09-24T02:42:08Z
--- base_model: Qwen/Qwen3-4B-Base library_name: transformers model_name: tatoeba-tok-multi-qwen3-4b tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for tatoeba-tok-multi-qwen3-4b This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="NetherQuartz/tatoeba-tok-multi-qwen3-4b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.7.1+cu128 - Datasets: 3.6.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
lhkhiem28/CORE2-llama-3.2-3b-MATH-verified
lhkhiem28
2025-09-25T01:36:35Z
1
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "grpo", "trl", "hf_jobs", "conversational", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T18:44:25Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: transformers model_name: CORE2-llama-3.2-3b-MATH-verified tags: - generated_from_trainer - grpo - trl - hf_jobs licence: license --- # Model Card for CORE2-llama-3.2-3b-MATH-verified This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lhkhiem28/CORE2-llama-3.2-3b-MATH-verified", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kle3/CORE2/runs/hngk46yr) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.22.2 - Transformers: 4.55.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gbrx/GonzaLomo
gbrx
2025-09-25T01:35:34Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-20T21:51:21Z
--- license: apache-2.0 ---
King-8/creative-energy-analyzer
King-8
2025-09-25T01:28:04Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-25T01:27:56Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: creative-energy-analyzer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # creative-energy-analyzer This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6559 - Accuracy: 0.7667 - Precision: 0.7630 - Recall: 0.7667 - F1: 0.7618 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.2884 | 1.0 | 120 | 1.2956 | 0.5 | 0.4050 | 0.5 | 0.4412 | | 0.8774 | 2.0 | 240 | 0.9885 | 0.6 | 0.6306 | 0.6 | 0.5836 | | 0.515 | 3.0 | 360 | 0.8751 | 0.6417 | 0.6590 | 0.6417 | 0.6466 | | 0.3956 | 4.0 | 480 | 0.8428 | 0.6583 | 0.6679 | 0.6583 | 0.6564 | | 0.2319 | 5.0 | 600 | 0.8588 | 0.65 | 0.6633 | 0.65 | 0.6443 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.1.1 - Tokenizers 0.22.0
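A minimal inference sketch for the classifier above (the example sentence is invented; label names depend on the training label map):

```python
# Hedged sketch: score one text with the fine-tuned DistilBERT classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="King-8/creative-energy-analyzer")
print(classifier("I sketched three new song ideas before breakfast."))
# -> [{'label': ..., 'score': ...}]
```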
corzamennav/blockassist-bc-territorial_wild_antelope_1758763520
corzamennav
2025-09-25T01:26:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-25T01:26:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
s0ftl1pa/ppo-LunarLander-v3
s0ftl1pa
2025-09-25T01:25:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-25T01:24:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v3 type: LunarLander-v3 metrics: - type: mean_reward value: 257.58 +/- 44.36 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v3** This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v3.zip" is an assumed filename; check the repo's file list
checkpoint = load_from_hub(repo_id="s0ftl1pa/ppo-LunarLander-v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
mochunnian/CFLD
mochunnian
2025-09-25T01:24:22Z
0
0
null
[ "pytorch", "license:apache-2.0", "region:us" ]
null
2025-09-25T01:19:25Z
--- license: apache-2.0 ---
rzheng18/Qwen_android_ablation2_final_LR_1e-5_epoch_1
rzheng18
2025-09-25T01:24:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "sft", "trl", "alignment-handbook", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T01:04:08Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers tags: - generated_from_trainer - sft - trl - alignment-handbook licence: license --- # Model Card for Qwen_android_ablation2_final_LR_1e-5_epoch_1 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rzheng18/Qwen_android_ablation2_final_LR_1e-5_epoch_1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruichen-zheng-ucla/huggingface/runs/oft5yyxh) This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.2 - Pytorch: 2.6.0+cu126 - Datasets: 4.1.1 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
barbosarafael/my_awesome_eli5_clm-model
barbosarafael
2025-09-25T01:22:32Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T01:02:53Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilgpt2 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.8276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9133 | 1.0 | 1339 | 3.8400 | | 3.819 | 2.0 | 2678 | 3.8282 | | 3.7841 | 3.0 | 4017 | 3.8276 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.1.1 - Tokenizers 0.22.0
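A minimal generation sketch for the model above (the prompt is an invented example; sampling settings are left at their defaults):

```python
# Hedged sketch: generate a short continuation with the fine-tuned DistilGPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="barbosarafael/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)[0]["generated_text"])
```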
zjhhhh/qwen2.5_3B_Instruct_5selection_multi_beta_1_eta_1e6_step_312_final
zjhhhh
2025-09-25T01:22:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T01:21:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
miladfa7/picth_vision_checkpoint_2
miladfa7
2025-09-25T01:14:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-09-24T18:52:46Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: picth_vision_checkpoint_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # picth_vision_checkpoint_2 This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0293 - Accuracy: 0.9974 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 12552 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0314 | 0.25 | 3138 | 0.0665 | 0.9825 | | 0.0001 | 1.25 | 6276 | 0.0820 | 0.9869 | | 0.03 | 2.25 | 9414 | 0.0320 | 0.9956 | | 0.0 | 3.25 | 12552 | 0.0293 | 0.9974 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.3.1+cu121 - Datasets 3.6.0 - Tokenizers 0.21.1
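A minimal inference sketch for the checkpoint above (assumptions: `clip.mp4` is a placeholder path, and a video decoding backend such as `av` is installed):

```python
# Hedged sketch: classify one clip with the fine-tuned VideoMAE checkpoint.
from transformers import pipeline

clf = pipeline("video-classification", model="miladfa7/picth_vision_checkpoint_2")
print(clf("clip.mp4"))  # returns the top label/score pairs
```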
alphaduriendur/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF
alphaduriendur
2025-09-25T01:13:43Z
0
0
transformers
[ "transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-25T01:13:30Z
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE language: - en base_model: Qwen/Qwen2.5-Coder-3B-Instruct pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder - llama-cpp - gguf-my-repo --- # alphaduriendur/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo alphaduriendur/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo alphaduriendur/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo alphaduriendur/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo alphaduriendur/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -c 2048 ```
ChenWu98/qwen_2.5_1.5b_sft_type_split_1
ChenWu98
2025-09-25T01:13:41Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "endpoints_compatible", "region:us" ]
null
2025-09-07T02:46:34Z
--- base_model: Qwen/Qwen2.5-1.5B library_name: transformers model_name: qwen_2.5_1.5b_sft_type_split_1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen_2.5_1.5b_sft_type_split_1 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/qwen_2.5_1.5b_sft_type_split_1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/5qj300rm) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro
PrunaAI
2025-09-25T01:12:59Z
25
0
diffusers
[ "diffusers", "safetensors", "pruna-ai", "pruna_pro-ai", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-07-03T09:19:54Z
--- library_name: diffusers tags: - safetensors - pruna-ai - pruna_pro-ai --- # Model Card for PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead. ## Usage First things first, you need to install the pruna library: ```bash pip install pruna_pro ``` You can [use the diffusers library to load the model](https://huggingface.co/PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro?library=diffusers) but this might not include all optimizations by default. To ensure that all optimizations are applied, use the pruna library to load the model using the following code: ```python from pruna_pro import PrunaProModel loaded_model = PrunaProModel.from_pretrained( "PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro" ) # we can then run inference using the methods supported by the base model ``` For inference, you can use the inference methods of the original model like shown in [the original model card](https://huggingface.co/hf-internal-testing/tiny-stable-diffusion-pipe?library=diffusers). Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information. ## Smash Configuration The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model. ```bash { "batcher": null, "cacher": null, "compiler": null, "distiller": null, "distributer": null, "enhancer": null, "factorizer": null, "kernel": null, "pruner": null, "quantizer": null, "recoverer": null, "batch_size": 1, "device": "cpu", "device_map": null, "save_fns": [], "load_fns": [ "diffusers" ], "reapply_after_load": { "factorizer": null, "pruner": null, "quantizer": null, "distiller": null, "kernel": null, "cacher": null, "recoverer": null, "distributer": null, "compiler": null, "batcher": null, "enhancer": null } } ``` ## 🌍 Join the Pruna AI community! [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/JFQmtFKCjd) [![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)
panda19904/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_wise_caterpillar
panda19904
2025-09-25T01:10:08Z
144
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am extinct_wise_caterpillar", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T05:52:00Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am extinct_wise_caterpillar --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
seleniumpython/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M-GGUF
seleniumpython
2025-09-25T01:09:16Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct", "base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-25T01:08:00Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct tags: - llama-cpp - gguf-my-repo --- # seleniumpython/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-Coder-30B-A3B-Instruct`](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo seleniumpython/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M-GGUF --hf-file qwen3-coder-30b-a3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo seleniumpython/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M-GGUF --hf-file qwen3-coder-30b-a3b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo seleniumpython/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M-GGUF --hf-file qwen3-coder-30b-a3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo seleniumpython/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M-GGUF --hf-file qwen3-coder-30b-a3b-instruct-q4_k_m.gguf -c 2048 ```
jacksonhwebster/gpt2-lora_extreme_advice_merged_model
jacksonhwebster
2025-09-25T01:09:08Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T01:08:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kennydaglish/Qwen3-0.6B-Gensyn-Swarm-sleek_camouflaged_beaver
kennydaglish
2025-09-25T01:08:43Z
7
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am sleek_camouflaged_beaver", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T14:08:55Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am sleek_camouflaged_beaver --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jacksonhwebster/lora_extreme_advice_merged_model
jacksonhwebster
2025-09-25T01:07:07Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T01:06:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rebirthmonkey/gpt2-reuters-tokenizer
rebirthmonkey
2025-09-25T01:06:54Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-25T01:06:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nurdi/blockassist
nurdi
2025-09-25T01:06:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scavenging powerful chinchilla", "arxiv:2504.07091", "region:us" ]
null
2025-09-17T13:10:25Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging powerful chinchilla
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cjkasbdkjnlakb/agent-0924
cjkasbdkjnlakb
2025-09-25T01:03:25Z
0
0
peft
[ "peft", "safetensors", "qwen3", "text-generation", "axolotl", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "lora", "transformers", "conversational", "dataset:custom", "base_model:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T01:03:10Z
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- axolotl
- base_model:adapter:Qwen/Qwen3-4B-Instruct-2507
- lora
- transformers
datasets:
- custom
pipeline_tag: text-generation
model-index:
- name: workspace/train_dir_0924/checkpoints
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.12.2`
```yaml
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name

# Whether to load the model in 8-bit precision
load_in_8bit: false
# Whether to load the model in 4-bit precision (tied to QLoRA, which forces it on)
load_in_4bit: false
# Whether to strictly match the model structure; disabling it allows loading with small structural differences (e.g., to fit an adapter)
# strict: false

base_model: Qwen/Qwen3-4B-Instruct-2507

# Dataset settings
chat_template: qwen3
datasets:
  - path: /workspace/train_dir_0924/all_data.json  # each "-" is one list item, so several datasets can be used at once
    type: chat_template  # chat_template (custom format) or alpaca
    roles_to_train: ["assistant"]
    field_messages: messages  # field that holds the messages
    message_property_mappings:  # message_property_mappings={'role':'role', 'content':'content'}
      role: role
      content: content

dataset_prepared_path:
val_set_size: 0.08
output_dir: /workspace/train_dir_0924/checkpoints

sequence_len: 16384  # maximum context length the model will handle (default 2048)
pad_to_sequence_len: false
# context_parallel_size: 2  # split long sequences across several GPUs (requires micro_batch_size: 1)
sample_packing: false  # pack several samples into one long sequence (sequence_len) during training to improve throughput
eval_sample_packing: false  # pack several samples during evaluation

# Training hyperparameters
adapter: lora  # lora or qlora
lora_model_dir:
lora_r: 32  # lora_r of 16 is the usual default, balancing accuracy against VRAM
lora_alpha: 64  # scaling factor controlling LoRA's influence; usually set to 2*r or 4*r
lora_dropout: 0.05  # changed from 0.05 to 0.1 to increase dropout
lora_target_linear: true

micro_batch_size: 4  # micro-batch size; a 94GB H100 can run 4 (at ~10k tokens)
gradient_accumulation_steps: 4  # gradient accumulation: accumulate gradients over several micro-batches (micro_batch_size) before each weight update. An effective batch of 16 is typical: below 8 training gets noisy, above 32 it only costs more time for limited gain
auto_find_batch_size: false  # let Axolotl keep adjusting batch_size ⚠️ not applicable with ZeRO-3
num_epochs: 3
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 5e-5

# bf16: auto + tf32: true gives better stability and performance.
bf16: auto
tf32: true

# early_stopping_patience:
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
# auto_resume_from_checkpoints: true  # automatically resume from the latest checkpoint in output_dir
logging_steps: 1
logging_dir: /workspace/train_dir_0924/logs
flash_attention: true

warmup_ratio: 0.03
evals_per_epoch: 8  # evaluate more often (raised from 4 to 8)
saves_per_epoch: 1  # save regularly so the best checkpoint is easy to pick
weight_decay: 0.01  # raised from 0.0 to 0.01 for more regularization

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false  # the H200 has enough VRAM, no offload needed
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: Qwen3DecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD
```

</details><br>

# workspace/train_dir_0924/checkpoints

This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the /workspace/train_dir_0924/all_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0463
- Memory/max Mem Active(gib): 95.98
- Memory/max Mem Allocated(gib): 95.98
- Memory/device Mem Reserved(gib): 111.41

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 47
- training_steps: 1578

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) |
|:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|
| No log        | 0      | 0    | 0.9792          | 48.13           | 47.95              | 48.76             |
| 0.0796        | 0.1255 | 66   | 0.0796          | 62.91           | 62.71              | 73.19             |
| 0.0615        | 0.2511 | 132  | 0.0662          | 66.5            | 66.31              | 76.92             |
| 0.064         | 0.3766 | 198  | 0.0616          | 66.5            | 66.31              | 76.92             |
| 0.0673        | 0.5021 | 264  | 0.0591          | 72.68           | 72.48              | 84.29             |
| 0.0574        | 0.6277 | 330  | 0.0568          | 72.68           | 72.48              | 84.29             |
| 0.053         | 0.7532 | 396  | 0.0553          | 72.68           | 72.48              | 84.29             |
| 0.0631        | 0.8787 | 462  | 0.0543          | 72.68           | 72.48              | 84.29             |
| 0.0571        | 1.0038 | 528  | 0.0528          | 72.68           | 72.48              | 84.29             |
| 0.0507        | 1.1293 | 594  | 0.0522          | 72.8            | 72.6               | 84.29             |
| 0.0498        | 1.2549 | 660  | 0.0513          | 72.8            | 72.6               | 84.29             |
| 0.0446        | 1.3804 | 726  | 0.0507          | 72.8            | 72.6               | 84.29             |
| 0.0525        | 1.5059 | 792  | 0.0498          | 95.98           | 95.98              | 111.41            |
| 0.0518        | 1.6315 | 858  | 0.0493          | 95.98           | 95.98              | 111.41            |
| 0.0445        | 1.7570 | 924  | 0.0486          | 95.98           | 95.98              | 111.41            |
| 0.0593        | 1.8825 | 990  | 0.0482          | 95.98           | 95.98              | 111.41            |
| 0.0491        | 2.0076 | 1056 | 0.0478          | 95.98           | 95.98              | 111.41            |
| 0.0459        | 2.1331 | 1122 | 0.0475          | 95.98           | 95.98              | 111.41            |
| 0.0393        | 2.2587 | 1188 | 0.0471          | 95.98           | 95.98              | 111.41            |
| 0.048         | 2.3842 | 1254 | 0.0468          | 95.98           | 95.98              | 111.41            |
| 0.0532        | 2.5097 | 1320 | 0.0466          | 95.98           | 95.98              | 111.41            |
| 0.0395        | 2.6353 | 1386 | 0.0465          | 95.98           | 95.98              | 111.41            |
| 0.0457        | 2.7608 | 1452 | 0.0464          | 95.98           | 95.98              | 111.41            |
| 0.0528        | 2.8864 | 1518 | 0.0463          | 95.98           | 95.98              | 111.41            |

### Framework versions

- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
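## Loading the adapter

Since this run trains a LoRA adapter (`adapter: lora` in the config above) rather than full weights, inference requires attaching the adapter to the base model. The sketch below uses 🤗 PEFT; the adapter repo id is an assumption standing in for wherever these checkpoint weights are actually published. As a sanity check on the hyperparameters, the reported total_train_batch_size of 64 is micro_batch_size 4 × gradient_accumulation_steps 4 × 4 devices.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "cjkasbdkjnlakb/agent-0924"  # assumption: repo id where the LoRA adapter is published

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights on top of the base model

# Build a chat prompt with the Qwen3 template and generate a short reply.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```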
adalberto-temp/energy_apo_V0.1_long
adalberto-temp
2025-09-25T01:01:43Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-25T00:55:45Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
modelsignature/mistral-7b-v0.3
modelsignature
2025-09-25T00:56:24Z
0
0
vllm
[ "vllm", "safetensors", "mistral", "mistral-common", "license:apache-2.0", "region:us" ]
null
2025-09-25T00:00:27Z
---
library_name: vllm
license: apache-2.0
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>.
tags:
- mistral-common
---

# Model Card for Mistral-7B-v0.3

The Mistral-7B-v0.3 Large Language Model (LLM) is a Mistral-7B-v0.2 with extended vocabulary.

Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2):
- Extended vocabulary to 32768

## Installation

It is recommended to use `mistralai/Mistral-7B-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '7B-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-7B-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Demo

After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.

```
mistral-demo $HOME/mistral_models/7B-v0.3
```

This should give something along the following lines:

```
This is a test of the emergency broadcast system. This is only a test.

If this were a real emergency, you would be told what to do.

This is a test
=====================
This is another test of the new blogging software. I’m not sure if I’m going to keep it or not. I’m not sure if I’m going to keep
=====================
This is a third test, mistral AI is very good at testing. 🙂

This is a third test, mistral AI is very good at testing. 🙂

This
=====================
```

## Generate with `transformers`

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms.

We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
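## Generate with `mistral_inference` (Python)

As a Python alternative to the `mistral-demo` CLI above, the downloaded checkpoint can also be driven through the `mistral_inference` API directly. This is a minimal sketch, not an official snippet: the module paths and the `generate` signature reflect the mistral-inference 1.x API and should be verified against the installed version, and the prompt text is arbitrary.

```py
from pathlib import Path

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_inference.generate import generate
from mistral_inference.transformer import Transformer

# Assumes the files fetched by the snapshot_download snippet above.
mistral_models_path = Path.home().joinpath('mistral_models', '7B-v0.3')

tokenizer = MistralTokenizer.from_file(str(mistral_models_path / "tokenizer.model.v3"))
model = Transformer.from_folder(str(mistral_models_path))

# Mistral-7B-v0.3 is a base model, so encode a raw completion prompt (BOS only, no chat template).
raw_tokenizer = tokenizer.instruct_tokenizer.tokenizer
tokens = raw_tokenizer.encode("This is a test of", bos=True, eos=False)

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.7, eos_id=raw_tokenizer.eos_id)
print(raw_tokenizer.decode(out_tokens[0]))
```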