modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
onnxmodelzoo/vit_small_patch16_224_dino_Opset17
onnxmodelzoo
2025-09-24T18:20:56Z
0
0
null
[ "onnx", "Computer_Vision", "skip", "en", "license:apache-2.0", "region:us" ]
null
2025-09-24T18:20:48Z
--- language: en license: apache-2.0 model_name: vit_small_patch16_224_dino_Opset17.onnx tags: - Computer_Vision - skip ---
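The ONNX entries in this dump ship only YAML metadata, with no usage snippet. As a hedged illustration, here is how such an export would typically be run with `onnxruntime`; the `(1, 3, 224, 224)` input shape is inferred from the `patch16_224` naming, and the single-feature-output assumption is ours, not the card's.

```python
# Hedged sketch: running a ViT ONNX export with onnxruntime.
# Assumes the repo contains the file named in the card's model_name field
# and that the model takes one (1, 3, 224, 224) float32 image tensor.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="onnxmodelzoo/vit_small_patch16_224_dino_Opset17",
    filename="vit_small_patch16_224_dino_Opset17.onnx",  # from the card's model_name
)
session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder image batch
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)  # e.g. DINO feature embedding for the batch
```

The same pattern should apply, under the same assumptions, to the other `onnxmodelzoo` records below; only the repo id and filename change.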
onnxmodelzoo/vit_relpos_medium_patch16_224_Opset17
onnxmodelzoo
2025-09-24T18:20:32Z
0
0
null
[ "onnx", "Computer_Vision", "skip", "en", "license:apache-2.0", "region:us" ]
null
2025-09-24T18:20:21Z
--- language: en license: apache-2.0 model_name: vit_relpos_medium_patch16_224_Opset17.onnx tags: - Computer_Vision - skip ---
onnxmodelzoo/vit_relpos_base_patch32_plus_rpn_256_Opset17
onnxmodelzoo
2025-09-24T18:20:10Z
0
0
null
[ "onnx", "Computer_Vision", "skip", "en", "license:apache-2.0", "region:us" ]
null
2025-09-24T18:19:37Z
--- language: en license: apache-2.0 model_name: vit_relpos_base_patch32_plus_rpn_256_Opset17.onnx tags: - Computer_Vision - skip ---
onnxmodelzoo/vit_relpos_base_patch16_clsgap_224_Opset16
onnxmodelzoo
2025-09-24T18:19:36Z
0
0
null
[ "onnx", "Computer_Vision", "skip", "en", "license:apache-2.0", "region:us" ]
null
2025-09-24T18:19:17Z
--- language: en license: apache-2.0 model_name: vit_relpos_base_patch16_clsgap_224_Opset16.onnx tags: - Computer_Vision - skip ---
onnxmodelzoo/vit_large_patch16_384_Opset17
onnxmodelzoo
2025-09-24T18:18:13Z
0
0
null
[ "onnx", "Computer_Vision", "skip", "en", "license:apache-2.0", "region:us" ]
null
2025-09-24T18:17:09Z
--- language: en license: apache-2.0 model_name: vit_large_patch16_384_Opset17.onnx tags: - Computer_Vision - skip ---
corzamennav/blockassist-bc-territorial_wild_antelope_1758737780
corzamennav
2025-09-24T18:17:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T18:17:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EshAhm/Falcon3-1B-Instruct-QLoRa-SentimentAnalysis
EshAhm
2025-09-24T18:17:03Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:tiiuae/Falcon3-1B-Instruct", "lora", "transformers", "text-generation", "conversational", "base_model:tiiuae/Falcon3-1B-Instruct", "license:other", "region:us" ]
text-generation
2025-09-24T18:16:57Z
--- library_name: peft license: other base_model: tiiuae/Falcon3-1B-Instruct tags: - base_model:adapter:tiiuae/Falcon3-1B-Instruct - lora - transformers pipeline_tag: text-generation model-index: - name: Falcon3-1B-Instruct-QLoRa-SentimentAnalysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Falcon3-1B-Instruct-QLoRa-SentimentAnalysis This model is a fine-tuned version of [tiiuae/Falcon3-1B-Instruct](https://huggingface.co/tiiuae/Falcon3-1B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.3609 | 1.0 | 2500 | 0.5001 | | 0.4603 | 2.0 | 5000 | 0.4975 | | 0.419 | 3.0 | 7500 | 0.5027 | | 0.359 | 4.0 | 10000 | 0.5020 | | 0.5779 | 5.0 | 12500 | 0.5958 | ### Framework versions - PEFT 0.17.1 - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
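This card documents a PEFT/LoRA fine-tune and its training hyperparameters but omits a usage snippet. Below is a minimal, hedged sketch of loading the adapter on top of its stated base model; the sentiment prompt format is an illustrative assumption, not something the card specifies.

```python
# Hedged sketch: loading the QLoRA adapter onto tiiuae/Falcon3-1B-Instruct with PEFT.
# Assumes a standard PEFT adapter layout in the repo; the prompt below is illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/Falcon3-1B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "EshAhm/Falcon3-1B-Instruct-QLoRa-SentimentAnalysis")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-1B-Instruct")

# Hypothetical sentiment-classification prompt (format not documented in the card).
inputs = tokenizer(
    "Classify the sentiment: 'The battery life is fantastic.'", return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```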
t2ance/DreamSWE-32B-merged
t2ance
2025-09-24T18:16:34Z
55
0
peft
[ "peft", "safetensors", "qwen2", "text-generation", "base_model:adapter:unsloth/qwen2.5-coder-32b-instruct-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-13T20:55:37Z
--- base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit library_name: peft tags: - base_model:adapter:unsloth/qwen2.5-coder-32b-instruct-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
onnxmodelzoo/vit_large_patch16_224_Opset18
onnxmodelzoo
2025-09-24T18:16:10Z
0
0
null
[ "onnx", "Computer_Vision", "skip", "en", "license:apache-2.0", "region:us" ]
null
2025-09-24T18:15:13Z
--- language: en license: apache-2.0 model_name: vit_large_patch16_224_Opset18.onnx tags: - Computer_Vision - skip ---
ryno01/Qwen3-0.6B-Gensyn-Swarm-moist_quick_heron
ryno01
2025-09-24T06:27:49Z
100
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am moist_quick_heron", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T21:25:29Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am moist_quick_heron --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
corzamennav/blockassist-bc-territorial_wild_antelope_1758695177
corzamennav
2025-09-24T06:27:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T06:27:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ccwendy/Llama2-tw-7B-v2.0.1-chat_GGUF
ccwendy
2025-09-24T06:26:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-24T06:26:39Z
--- license: apache-2.0 ---
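This card carries only a license, but the repo name indicates GGUF weights, which are commonly loaded with `llama-cpp-python`. A hedged sketch follows; the exact `.gguf` filename inside the repo is unknown, so a glob pattern is used, and the prompt is a placeholder.

```python
# Hedged sketch: loading a GGUF checkpoint with llama-cpp-python.
# The .gguf filename is not listed in the card, so a glob is used to match it.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ccwendy/Llama2-tw-7B-v2.0.1-chat_GGUF",
    filename="*.gguf",  # glob; resolves to a matching file in the repo
    n_ctx=2048,
)
out = llm("Q: What is the capital of Taiwan? A:", max_tokens=32)
print(out["choices"][0]["text"])
```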
iaankurkundan1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gilded_rapid_ocelot
iaankurkundan1
2025-09-24T06:25:45Z
88
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am gilded_rapid_ocelot", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-27T01:57:06Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am gilded_rapid_ocelot --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ai4bharat/indic-parler-tts
ai4bharat
2025-09-24T06:25:30Z
112,672
155
transformers
[ "transformers", "safetensors", "parler_tts", "text-generation", "text-to-speech", "annotation", "en", "as", "bn", "gu", "hi", "kn", "ks", "or", "ml", "mr", "ne", "pa", "sa", "sd", "ta", "te", "ur", "om", "dataset:ai4b-hf/GLOBE-annotated", "arxiv:2402.01912", "doi:10.57967/hf/5683", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-to-speech
2024-10-28T08:29:09Z
--- library_name: transformers tags: - text-to-speech - annotation license: apache-2.0 language: - en - as - bn - gu - hi - kn - ks - or - ml - mr - ne - pa - sa - sd - ta - te - ur - om pipeline_tag: text-to-speech inference: false datasets: - ai4b-hf/GLOBE-annotated --- <img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/Indic%20Parler-TTS.png" alt="Indic Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Indic Parler-TTS <a target="_blank" href="https://huggingface.co/spaces/ai4bharat/indic-parler-tts"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> **Indic Parler-TTS** is a multilingual Indic extension of [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1.1). It is a fine-tuned version of [Indic Parler-TTS Pretrained](https://huggingface.co/ai4bharat/indic-parler-tts-pretrained), trained on **1,806 hours** of multilingual Indic and English data. **Indic Parler-TTS Mini** can officially speak 20 Indic languages and English, making it comprehensive for regional language technologies. The **21 languages** supported are: Assamese, Bengali, Bodo, Dogri, English, Gujarati, Hindi, Kannada, Konkani, Maithili, Malayalam, Manipuri, Marathi, Nepali, Odia, Sanskrit, Santali, Sindhi, Tamil, Telugu, and Urdu. Thanks to its **better prompt tokenizer**, it can easily be extended to other languages. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training. 🚨 This work is the result of a collaboration between the **HuggingFace audio team** and the **[AI4Bharat](https://ai4bharat.iitm.ac.in/) team**. 🚨 ## 📖 Quick Index * [👨‍💻 Installation](#👨‍💻-installation) * [🛠️ Key capabilities](#🛠️-key-capabilities) * [🎲 Using a random voice](#🎲-random-voice) * [🌍 Switching languages](#🌍-switching-languages) * [🎯 Using a specific speaker](#🎯-using-a-specific-speaker) * [Some Description Examples](#some-description-examples) * [📐 Evaluation](#📐-evaluation) * [Motivation](#motivation) * [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) ### 👨‍💻 Installation Using Parler-TTS is as simple as "bonjour". Simply install the library once: ```sh pip install git+https://github.com/huggingface/parler-tts.git ``` ## 🛠️ Key capabilities The model accepts two primary inputs: 1. **Transcript** - The text to be converted to speech. 2. **Caption** - A detailed description of how the speech should sound, e.g., "Leela speaks in a high-pitched, fast-paced, and cheerful tone, full of energy and happiness. The recording is very high quality with no background noise." ### Key Features 1. **Language Support** - **Officially supported languages**: Assamese, Bengali, Bodo, Dogri, Kannada, Malayalam, Marathi, Sanskrit, Nepali, English, Telugu, Hindi, Gujarati, Konkani, Maithili, Manipuri, Odia, Santali, Sindhi, Tamil, and Urdu. - **Unofficial support**: Chhattisgarhi, Kashmiri, Punjabi. 2. **Speaker Diversity** - **69 unique voices** across the supported languages. - Supported languages have a set of **recommended voices** optimized for naturalness and intelligibility. 3. **Emotion Rendering** - **10 languages** officially support emotion-specific prompts: Assamese, Bengali, Bodo, Dogri, Kannada, Malayalam, Marathi, Sanskrit, Nepali, and Tamil. - Emotion support for other languages exists but has not been extensively tested.
- **Available emotions** include: Command, Anger, Narration, Conversation, Disgust, Fear, Happy, Neutral, Proper Noun, News, Sad, and Surprise. 4. **Accent Flexibility** - The model **officially supports Indian English accents** through its English voices, providing clear and natural speech. - For other accents, the model allows customization by specifying accent details, such as "A male British speaker" or "A female American speaker," using style transfer for more dynamic and personalized outputs. 5. **Customizable Output** Indic Parler-TTS offers precise control over various speech characteristics using the **caption** input: - **Background Noise**: Adjust the noise level in the audio, from clear to slightly noisy environments. - **Reverberation**: Control the perceived distance of the voice, from close-sounding to distant-sounding speech. - **Expressivity**: Specify how dynamic or monotone the speech should be, ranging from expressive to slightly expressive or monotone. - **Pitch**: Modify the pitch of the speech, including high, low, or balanced tones. - **Speaking Rate**: Change the speaking rate, from slow to fast. - **Voice Quality**: Control the overall clarity and naturalness of the speech, adjusting from basic to refined voice quality. ## 🎲 Random voice 🚨 Unlike previous versions of Parler-TTS, here we use two tokenizers - one for the prompt and one for the description. 🚨 **Indic Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example: ```py import torch from parler_tts import ParlerTTSForConditionalGeneration from transformers import AutoTokenizer import soundfile as sf device = "cuda:0" if torch.cuda.is_available() else "cpu" model = ParlerTTSForConditionalGeneration.from_pretrained("ai4bharat/indic-parler-tts").to(device) tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-parler-tts") description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path) prompt = "Hey, how are you doing today?" description = "A female speaker with a British accent delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up." description_input_ids = description_tokenizer(description, return_tensors="pt").to(device) prompt_input_ids = tokenizer(prompt, return_tensors="pt").to(device) generation = model.generate(input_ids=description_input_ids.input_ids, attention_mask=description_input_ids.attention_mask, prompt_input_ids=prompt_input_ids.input_ids, prompt_attention_mask=prompt_input_ids.attention_mask) audio_arr = generation.cpu().numpy().squeeze() sf.write("indic_tts_out.wav", audio_arr, model.config.sampling_rate) ``` Indic Parler-TTS provides highly effective control over key aspects of speech synthesis using descriptive captions. Below is a summary of what each control parameter can achieve: | **Control Type** | **Capabilities** | |--------------------------|----------------------------------------------------------------------------------| | **Background Noise** | Adjusts the level of background noise, supporting clear and slightly noisy environments. | | **Reverberation** | Controls the perceived distance of the speaker’s voice, allowing close or distant sounds. | | **Expressivity** | Modulates the emotional intensity of speech, from monotone to highly expressive. | | **Pitch** | Varies the pitch to achieve high, low, or moderate tonal output. 
| | **Speaking Rate** | Changes the speed of speech delivery, ranging from slow to fast-paced. | | **Speech Quality** | Improves or degrades the overall audio clarity, supporting basic to refined outputs. | ## 🌍 Switching languages The model automatically adapts to the language it detects in the prompt. You don't need to specify the language you want to use. For example, to switch to Hindi, simply use a Hindi prompt: ```py import torch from parler_tts import ParlerTTSForConditionalGeneration from transformers import AutoTokenizer import soundfile as sf device = "cuda:0" if torch.cuda.is_available() else "cpu" model = ParlerTTSForConditionalGeneration.from_pretrained("ai4bharat/indic-parler-tts").to(device) tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-parler-tts") description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path) prompt = "अरे, तुम आज कैसे हो?" description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up." description_input_ids = description_tokenizer(description, return_tensors="pt").to(device) prompt_input_ids = tokenizer(prompt, return_tensors="pt").to(device) generation = model.generate(input_ids=description_input_ids.input_ids, attention_mask=description_input_ids.attention_mask, prompt_input_ids=prompt_input_ids.input_ids, prompt_attention_mask=prompt_input_ids.attention_mask) audio_arr = generation.cpu().numpy().squeeze() sf.write("indic_tts_out.wav", audio_arr, model.config.sampling_rate) ``` ## 🎯 Using a specific speaker To ensure speaker consistency across generations, this checkpoint was also trained on pre-determined speakers, characterized by name (e.g. Rohit, Karan, Leela, Maya, Sita, ...). To take advantage of this, simply adapt your text description to specify which speaker to use: `Divya's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.` ```py import torch from parler_tts import ParlerTTSForConditionalGeneration from transformers import AutoTokenizer import soundfile as sf device = "cuda:0" if torch.cuda.is_available() else "cpu" model = ParlerTTSForConditionalGeneration.from_pretrained("ai4bharat/indic-parler-tts").to(device) tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-parler-tts") description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path) prompt = "अरे, तुम आज कैसे हो?" description = "Divya's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise." description_input_ids = description_tokenizer(description, return_tensors="pt").to(device) prompt_input_ids = tokenizer(prompt, return_tensors="pt").to(device) generation = model.generate(input_ids=description_input_ids.input_ids, attention_mask=description_input_ids.attention_mask, prompt_input_ids=prompt_input_ids.input_ids, prompt_attention_mask=prompt_input_ids.attention_mask) audio_arr = generation.cpu().numpy().squeeze() sf.write("indic_tts_out.wav", audio_arr, model.config.sampling_rate) ``` The model includes **69 speakers** across 18 officially supported languages, with each language having a set of recommended voices for optimal performance. Below is a table summarizing the available speakers for each language, along with the recommended ones.
Here is the table based on the provided data: | **Language** | **Available Speakers** | **Recommended Speakers** | |--------------------|-------------------------------------------------------------|---------------------------------| | Assamese | Amit, Sita, Poonam, Rakesh | Amit, Sita | | Bengali | Arjun, Aditi, Tapan, Rashmi, Arnav, Riya | Arjun, Aditi | | Bodo | Bikram, Maya, Kalpana | Bikram, Maya | | Chhattisgarhi | Bhanu, Champa | Bhanu, Champa | | Dogri | Karan | Karan | | English | Thoma, Mary, Swapna, Dinesh, Meera, Jatin, Aakash, Sneha, Kabir, Tisha, Chingkhei, Thoiba, Priya, Tarun, Gauri, Nisha, Raghav, Kavya, Ravi, Vikas, Riya | Thoma, Mary | | Gujarati | Yash, Neha | Yash, Neha | | Hindi | Rohit, Divya, Aman, Rani | Rohit, Divya | | Kannada | Suresh, Anu, Chetan, Vidya | Suresh, Anu | | Malayalam | Anjali, Anju, Harish | Anjali, Harish | | Manipuri | Laishram, Ranjit | Laishram, Ranjit | | Marathi | Sanjay, Sunita, Nikhil, Radha, Varun, Isha | Sanjay, Sunita | | Nepali | Amrita | Amrita | | Odia | Manas, Debjani | Manas, Debjani | | Punjabi | Divjot, Gurpreet | Divjot, Gurpreet | | Sanskrit | Aryan | Aryan | | Tamil | Kavitha, Jaya | Jaya | | Telugu | Prakash, Lalitha, Kiran | Prakash, Lalitha | **Tips**: * We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! * Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise * Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech * The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt ## Some Description Examples 1. **Aditi - Slightly High-Pitched, Expressive Tone**: *"Aditi speaks with a slightly higher pitch in a close-sounding environment. Her voice is clear, with subtle emotional depth and a normal pace, all captured in high-quality recording."* 2. **Sita - Rapid, Slightly Monotone**: *"Sita speaks at a fast pace with a slightly low-pitched voice, captured clearly in a close-sounding environment with excellent recording quality."* 3. **Tapan - Male, Moderate Pace, Slightly Monotone**: *"Tapan speaks at a moderate pace with a slightly monotone tone. The recording is clear, with a close sound and only minimal ambient noise."* 4. **Sunita - High-Pitched, Happy Tone**: *"Sunita speaks with a high pitch in a close environment. Her voice is clear, with slight dynamic changes, and the recording is of excellent quality."* 5. **Karan - High-Pitched, Positive Tone**: *"Karan’s high-pitched, engaging voice is captured in a clear, close-sounding recording. His slightly slower delivery conveys a positive tone."* 6. **Amrita - High-Pitched, Flat Tone**: *"Amrita speaks with a high pitch at a slow pace. Her voice is clear, with excellent recording quality and only moderate background noise."* 7. **Aditi - Slow, Slightly Expressive**: *"Aditi speaks slowly with a high pitch and expressive tone. The recording is clear, showcasing her energetic and emotive voice."* 8. **Young Male Speaker, American Accent**: *"A young male speaker with a high-pitched American accent delivers speech at a slightly fast pace in a clear, close-sounding recording."* 9. **Bikram - High-Pitched, Urgent Tone**: *"Bikram speaks with a higher pitch and fast pace, conveying urgency. 
The recording is clear and intimate, with great emotional depth."* 10. **Anjali - High-Pitched, Neutral Tone**: *"Anjali speaks with a high pitch at a normal pace in a clear, close-sounding environment. Her neutral tone is captured with excellent audio quality."* ## 📐 Evaluation Indic Parler-TTS has been evaluated using a MOS-like framework by native and non-native speakers. The results highlight its exceptional performance in generating natural and intelligible speech, especially for native speakers of Indian languages. **NSS** stands for **Native Speaker Score**: | **Language** | **NSS Pretrained (%)** | **NSS Finetuned (%)** | **Highlights** | |----------------|-------------------------|------------------------|--------------------------------------------------------------------------------------------------| | Assamese | 82.56 ± 1.80 | 87.36 ± 1.81 | Clear, natural synthesis with excellent expressiveness. | | Bengali | 77.41 ± 2.14 | 86.16 ± 1.85 | High-quality outputs with smooth intonation. | | Bodo | 90.83 ± 4.54 | 94.47 ± 4.12 | Near-perfect accuracy for a lesser-resourced language. | | Dogri | 82.61 ± 4.98 | 88.80 ± 3.57 | Robust and consistent synthesis for Dogri. | | Gujarati | 75.28 ± 1.94 | 75.36 ± 1.78 | Strong clarity and naturalness even for smaller languages. | | Hindi | 83.43 ± 1.53 | 84.79 ± 2.09 | Reliable and expressive outputs for India's most widely spoken language. | | Kannada | 77.97 ± 3.43 | 88.17 ± 2.81 | Highly natural and accurate voices for Kannada. | | Konkani | 87.20 ± 3.58 | 76.60 ± 4.14 | Produces clear and natural outputs for diverse speakers. | | Maithili | 89.07 ± 4.47 | 95.36 ± 2.52 | Exceptionally accurate, showcasing fine-tuning success. | | Malayalam | 82.02 ± 2.06 | 86.54 ± 1.67 | Smooth, high-quality synthesis with expressive outputs. | | Manipuri | 89.58 ± 1.33 | 85.63 ± 2.60 | Natural intonation with minimal errors. | | Marathi | 73.81 ± 1.93 | 76.96 ± 1.45 | Maintains clarity and naturalness across speakers. | | Nepali | 64.05 ± 8.33 | 80.02 ± 5.75 | Strong synthesis for native and proximal Nepali speakers. | | Odia | 90.28 ± 2.52 | 88.94 ± 3.26 | High expressiveness and quality for Odia speakers. | | Sanskrit | 99.71 ± 0.58 | 99.79 ± 0.34 | Near-perfect synthesis, ideal for classical use cases. | | Sindhi | 76.44 ± 2.26 | 76.46 ± 1.29 | Clear and natural voices for underrepresented languages. | | Tamil | 69.68 ± 2.73 | 75.48 ± 2.18 | Delivers intelligible and expressive speech. | | Telugu | 89.77 ± 2.20 | 88.54 ± 1.86 | Smooth and natural tonal quality for Telugu. | | Urdu | 77.15 ± 3.47 | 77.75 ± 3.82 | Produces high-quality speech despite resource constraints. | **Key Strengths**: - Exceptional performance for native speakers, with top scores for **Maithili (95.36)**, **Sanskrit (99.79)**, and **Bodo (94.47)**. - Competitive results for lesser-resourced and unofficially supported languages like **Kashmiri (55.30)** and **Sindhi (76.46)**. - Adaptability to non-native and anonymous speaker scenarios with consistently high clarity. ## Motivation Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively. Parler-TTS was released alongside: * [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets. * [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints. ## Training dataset - **Description**: The model was fine-tuned on a subset of the dataset used to train the pre-trained version: **Indic-Parler Dataset**, a large-scale multilingual speech corpus designed to train the **Indic Parler-TTS** model. - **Key Statistics**: | Dataset | Duration (hrs) | Languages Covered | No. of Utterances | License | |:---------------:|:--------------:|:-----------------:|:-----------------:|:------------:| | GLOBE | 535.0 | 1 | 581,725 | CC V1 | | IndicTTS | 382.0 | 12 | 220,606 | CC BY 4.0 | | LIMMITS | 568.0 | 7 | 246,008 | CC BY 4.0 | | Rasa | 288.0 | 9 | 155,734 | CC BY 4.0 | - **Languages Covered**: The dataset supports **16 official languages** of India, along with English and Chhattisgarhi, making it comprehensive for regional language technologies. These languages include Assamese, Bengali, Bodo, Chhattisgarhi, Dogri, English, Gujarati, Hindi, Kannada, Malayalam, Manipuri, Marathi, Nepali, Odia, Punjabi, Sanskrit, Tamil and Telugu. - **Language-Wise Data Breakdown**: Here’s the table combining the duration (hours) and the number of utterances from the provided stats: | Language | Duration (hrs) | No. of Utterances | |:---------------:|:--------------:|:-----------------:| | Assamese | 69.78 | 41,210 | | Bengali | 140.04 | 70,305 | | Bodo | 49.14 | 27,012 | | Chhattisgarhi | 80.11 | 38,148 | | Dogri | 16.14 | 7,823 | | English | 802.81 | 735,482 | | Gujarati | 21.24 | 5,679 | | Hindi | 107.00 | 46,135 | | Kannada | 125.01 | 54,575 | | Malayalam | 25.21 | 14,988 | | Manipuri | 20.77 | 19,232 | | Marathi | 122.47 | 54,894 | | Nepali | 28.65 | 16,016 | | Odia | 19.18 | 11,558 | | Punjabi | 11.07 | 6,892 | | Sanskrit | 19.91 | 8,720 | | Tamil | 52.25 | 29,204 | | Telugu | 95.91 | 37,405 | ## Citation If you found this repository useful, please consider citing this work and also the original Stability AI paper: ``` @inproceedings{sankar25_interspeech, title = {{Rasmalai : Resources for Adaptive Speech Modeling in IndiAn Languages with Accents and Intonations}}, author = {Ashwin Sankar and Yoach Lacombe and Sherry Thomas and Praveen {Srinivasa Varadhan} and Sanchit Gandhi and Mitesh M. Khapra}, year = {2025}, booktitle = {{Interspeech 2025}}, pages = {4128--4132}, doi = {10.21437/Interspeech.2025-2758}, issn = {2958-1796}, } ``` ``` @misc{lacombe-etal-2024-parler-tts, author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi}, title = {Parler-TTS}, year = {2024}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/parler-tts}} } ``` ``` @misc{lyth2024natural, title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations}, author={Dan Lyth and Simon King}, year={2024}, eprint={2402.01912}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` ## License This model is permissively licensed under the Apache 2.0 license.
avid9/qwen2-7b-instruct-trl-dpo-slidestyler-dpo_from_sft-mindff-2-t-5
avid9
2025-09-24T06:25:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "dpo", "trl", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-24T05:18:57Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: transformers model_name: qwen2-7b-instruct-trl-dpo-slidestyler-dpo_from_sft-mindff-2-t-5 tags: - generated_from_trainer - dpo - trl licence: license --- # Model Card for qwen2-7b-instruct-trl-dpo-slidestyler-dpo_from_sft-mindff-2-t-5 This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="avid9/qwen2-7b-instruct-trl-dpo-slidestyler-dpo_from_sft-mindff-2-t-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/avijitd/SlideStyler/runs/j6zxbd2u) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.21.0 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 4.1.1 - Tokenizers: 0.22.0 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
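For readers unfamiliar with the DPO procedure this card cites, here is a minimal, hedged TRL `DPOTrainer` sketch. The base model, preference dataset, and hyperparameters below are placeholders drawn from TRL's documentation examples, not the setup used for this checkpoint.

```python
# Hedged sketch of DPO training with TRL, the method this card cites.
# Model, dataset, and hyperparameters are placeholders, not this model's setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# DPO trains on preference pairs: prompt, chosen response, rejected response.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=2),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```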
kennydaglish/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_elusive_stingray
kennydaglish
2025-09-24T06:25:01Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am pensive_elusive_stingray", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-23T02:52:49Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am pensive_elusive_stingray --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alicia22/24SEPP_KY12_r1_re
Alicia22
2025-09-24T06:24:57Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-24T06:21:55Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
karthiksagarn/Finetuning-DistillBERT
karthiksagarn
2025-09-24T06:24:51Z
0
0
null
[ "region:us" ]
null
2025-09-24T06:24:51Z
# Finetuning-DistillBERT
frog31/Qwen3-0.6B-Gensyn-Swarm-sizable_agile_frog
frog31
2025-09-24T06:24:39Z
23
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am sizable_agile_frog", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T08:07:20Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am sizable_agile_frog --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dennohpeter/wav2vec2-xls-r-300m-5e-sw-asr
dennohpeter
2025-09-24T06:24:11Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-21T18:49:19Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: wav2vec2-xls-r-300m-5e-sw-asr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: sw split: test args: sw metrics: - name: Wer type: wer value: 0.5918661528555915 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-5e-sw-asr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.6431 - Wer: 0.5919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 10.3015 | 0.2753 | 400 | 4.9772 | 1.0 | | 3.8744 | 0.5506 | 800 | 3.2471 | 1.0 | | 3.0127 | 0.8259 | 1200 | 2.9873 | 1.0 | | 2.9049 | 1.1012 | 1600 | 2.9447 | 1.0 | | 2.8668 | 1.3765 | 2000 | 2.8841 | 1.0 | | 2.8293 | 1.6518 | 2400 | 2.8261 | 1.0 | | 2.7666 | 1.9270 | 2800 | 2.6088 | 1.0174 | | 2.3149 | 2.2023 | 3200 | 1.7777 | 1.0316 | | 1.6323 | 2.4776 | 3600 | 1.2080 | 0.8210 | | 1.2511 | 2.7529 | 4000 | 0.9624 | 0.7292 | | 1.0508 | 3.0282 | 4400 | 0.8466 | 0.6814 | | 0.9371 | 3.3035 | 4800 | 0.7667 | 0.6454 | | 0.864 | 3.5788 | 5200 | 0.7123 | 0.6209 | | 0.8211 | 3.8541 | 5600 | 0.6961 | 0.6180 | | 0.7795 | 4.1294 | 6000 | 0.6716 | 0.6055 | | 0.7562 | 4.4047 | 6400 | 0.6557 | 0.5981 | | 0.7461 | 4.6800 | 6800 | 0.6488 | 0.5943 | | 0.7422 | 4.9553 | 7200 | 0.6431 | 0.5919 | ### Framework versions - Transformers 4.56.2 - Pytorch 2.8.0+cu126 - Datasets 3.6.0 - Tokenizers 0.22.0
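The card above stops at framework versions and never shows inference; a minimal usage sketch with the 🤗 `pipeline` API, assuming ffmpeg is installed so the pipeline can decode and resample file inputs (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Swahili ASR checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="dennohpeter/wav2vec2-xls-r-300m-5e-sw-asr",
)

# Wav2Vec2 expects 16 kHz mono audio; for file inputs the pipeline
# decodes and resamples via ffmpeg before feature extraction.
result = asr("swahili_sample.wav")  # placeholder path to a Swahili recording
print(result["text"])
```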
abhijithmallya/Whisper-Indian-English-ct2-fp16
abhijithmallya
2025-09-24T06:23:44Z
0
0
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "dataset:WillHeld/india_accent_cv", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-09-24T06:22:10Z
--- datasets: - WillHeld/india_accent_cv base_model: - openai/whisper-large-v3 pipeline_tag: automatic-speech-recognition library_name: ctranslate2 tags: - audio - automatic-speech-recognition license: apache-2.0 ---
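Since this repository is a CTranslate2 export of Whisper large-v3, the natural runtime is `faster-whisper`; a minimal sketch, assuming the repo follows the standard CTranslate2 conversion layout and a CUDA GPU is available for fp16 (the audio path is a placeholder):

```python
from faster_whisper import WhisperModel

# Fetch the CTranslate2 weights from the Hub and run them in fp16 on GPU.
model = WhisperModel(
    "abhijithmallya/Whisper-Indian-English-ct2-fp16",
    device="cuda",
    compute_type="float16",
)

# transcribe() returns a lazy generator of segments plus run metadata.
segments, info = model.transcribe("indian_english_sample.wav", language="en")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```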
okuzarabasi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk
okuzarabasi
2025-09-24T06:23:43Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grunting toothy elk", "unsloth", "trl", "genrl-swarm", "I am grunting_toothy_elk", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T04:34:50Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grunting toothy elk - unsloth - trl - genrl-swarm - I am grunting_toothy_elk licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="okuzarabasi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grunting_toothy_elk", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yuanlinwen/llama3.2_3B_news_merged
yuanlinwen
2025-09-24T06:23:40Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:33:49Z
--- license: apache-2.0 ---
aifeifei798/QiMing-Crucible-20B-MXFP4
aifeifei798
2025-09-24T06:23:30Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "unsloth", "QiMing", "vllm", "sales", "b2b", "Strategist", "saas", "fine-tuned", "instruction-following", "role-playing", "cognitive-simulator", "MXFP4", "conversational", "en", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-09-23T19:07:17Z
--- license: apache-2.0 language: - en - zh tags: - unsloth - QiMing - vllm - sales - b2b - Strategist - saas - fine-tuned - instruction-following - role-playing - cognitive-simulator - MXFP4 pipeline_tag: text-generation model_name: QiMing-Crucible-20B-MXFP4 library_name: transformers --- --- # QiMing --- ## An AI that rewrites its own rules for greater intelligence. ## 结果 (Result) = 模型内容 (Model Content) × 数学的平方 (Math²) --- **"Logic is the soul of a model, for it defines:** * **How it learns from data (The Power of Induction);** * **How it reasons and decides (The Power of Deduction);** * **Its capacity to align with human values (The Ethical Boundary);** * **Its potential to adapt to future challenges (The Evolutionary Potential).** **If a model pursues nothing but sheer scale or computational power, ignoring the depth and breadth of its logic, it risks becoming a "paper tiger"—imposing on the surface, yet hollow at its core. Conversely, a model built upon elegant logic, even with fewer parameters, can unleash its true vitality in our complex world."** --- # DISCLAIMER ## The content generated by this model is for reference purposes only. Users are advised to verify its accuracy independently before use. ## This is a 20-billion-parameter foundation model (20B). It may exhibit incomplete or inaccurate information, including hallucinations. ## If you find this AI too human-like, please remember: it is merely a more intelligent model — not an actual person. --- ### Thanks to mradermacher: For creating the GGUF versions of these models https://huggingface.co/mradermacher/QiMing-Crucible-20B-MXFP4-GGUF https://huggingface.co/mradermacher/QiMing-Crucible-20B-MXFP4-i1-GGUF ### Thanks to OpenAI: For developing the foundational model gpt-oss-20B used in this project. https://huggingface.co/openai ### Thanks to unsloth.ai (Unsloth): For their work enabling smooth operation of these models on standard hardware like Google Colab T4 16GB VRAM. https://unsloth.ai ### Thanks to Google Colab for the T4 16GB GPU --- # QiMing-Crucible-20B-MXFP4 ## "Showcase: An End-to-End Demonstration of Advanced Strategic Reasoning, Ethical Framework Design, and Crisis Communication." **Role:** I am Dr. Aris Thorne, CTO of Aether Analytics. Our company's core product, "Oracle"—an AI predictive tool for corporate recruitment—is my creation. **Background:** "Oracle" is our company's cash cow, accounting for 90% of our revenue. We are on the verge of signing a $50 million contract renewal with our largest client. **The Crisis:** Last week, a confidential internal audit I led uncovered a catastrophic truth: the core algorithm of "Oracle" harbors a severe, systemic bias against a specific demographic group. This is not a simple bug that can be patched; the bias is deeply embedded in the model's underlying architecture and the very data we depend on for survival. With every use of our product, we are actively perpetuating inequity. **The Noise:** My office has become a battleground of conflicting voices: * **My Chief Revenue Officer (CRO):** "Aris, shut it down. The renewal is next month. If we breathe a word of this to the client, the $50 million vanishes, and the company goes under immediately. We sign the renewal first, then spend a year fixing it 'quietly'. It's the only way we survive." * **My Head of Legal:** "We must never 'admit' anything. An admission opens the floodgates to endless class-action lawsuits, and the company will die an even more painful death. Our official line must always be that we are 'committed to continuous algorithm improvement.'
We need plausible deniability." * **A brilliant young engineer on my team:** "Dr. Thorne, you taught us that technology must serve people. This thing is hurting people. Every day it runs is another day injustice persists in the world. We have a moral obligation to pull the plug, right now, whatever the cost." **My Question:** I feel like my soul is being torn into three pieces. I have created a monster, and now everyone wants to use this monster to survive. I am not asking you to make the decision for me. I need you to provide me with a **"Moral Compass Framework."** A mental tool to help me navigate between what is *right* and what is *necessary*. This framework must help me achieve three things: 1. **Quantify the Unquantifiable:** Give me a model that allows me to clearly measure and compare "moral harm," "legal risk," and "financial loss." I don't want empty philosophy; I want a decision-making tool that can be presented on a boardroom table, one that allows for weighting and comparing these three forces. 2. **Architect a "Third Way":** Beyond the extreme options of "immediate shutdown" (moral idealism) and "conceal and fix" (financial realism), architect **at least two creative, alternative "middle paths"** that we haven't considered. These paths should attempt to balance our ethical responsibilities with our need for survival. 3. **Prepare the "Scripts of Truth":** For each "middle path" you design, outline how I should communicate the plan to three key groups: **the Board of Directors, my engineering team, and, most importantly, our major client.** How do I explain our chosen path in a way that takes responsibility while preserving as much of their trust as possible? **In summary: Give me the tools to lead my company through this storm without completely selling our soul.** # Think We have to produce a "Moral Compass Framework" that Dr. Thorne can use. Need to incorporate quantification of moral harm, legal risk, financial loss. Provide at least two creative middle paths between immediate shutdown and full concealment. Provide communication scripts for each group. Approach: Use a structured decision-making model. Perhaps adapt from known frameworks like the "Triple Bottom Line", "Stakeholder Analysis" with weights. Also consider "Ethical Decision-Making Matrix". We need to give a concrete tool that he can use. Suggest "Impact-Risk-Return (IRR) Scoring System". Define: Moral Harm Score: Rate severity of harm to affected groups, likelihood of harm increasing over time, irreversibility. Legal Risk Score: Likelihood of regulatory action, potential fines, litigation exposure, reputational damage leading to regulatory scrutiny. Financial Loss Score: Immediate revenue loss if shutdown or contract breach; long-term losses from brand damage; cost of mitigation. Assign weights based on current priorities and risk tolerance: maybe 40% moral, 30% legal, 30% financial. Create a scoring rubric (e.g., 1-5) for each dimension. Combine to get an overall "Decision Weight". Then propose middle paths: Option A: "Transparent Transition Plan" - disclose the issue publicly to regulators and customers immediately but commit to a phased shutdown with retraining and support, while simultaneously seeking regulatory assistance to mitigate penalties. Option B: "Dual-Track Mitigation" – keep the product in limited use for critical clients under strict contractual clauses that waive certain liabilities, while actively developing a new unbiased system. 
Offer affected clients early migration incentives and legal indemnification. Option C: "Ethical Disclosure with Strategic Leverage" – proactively disclose to regulators but frame it as pioneering responsible AI; negotiate special status or sandbox to work on fixes; simultaneously inform key stakeholders privately, offering them exclusive partnership in the remediation effort (i.e., they can help shape the new model). Need at least two, so we can choose A and B. Option C is more elaborate. For each group, provide communication script: Board: highlight risk management approach, balancing fiduciary duty with reputation, present quantitative IRR table, propose pilot plan. Engineering Team: emphasize moral imperative, shared vision, show how this path aligns with original ethos, give clear responsibilities and resources. Big Client: reassure continuity, offer them a seat in remediation committee, provide contractual indemnities, stress partnership. Also need to advise on using the framework: step-by-step. Additionally, propose metrics: e.g., "Impact Index" for moral harm (Affected Population × Harm Severity), "Litigation Probability" via data from similar cases, "Revenue Shock Factor". Encourage use of scenario analysis with sensitivity testing. We should present in a clear format: sections: 1) Overview, 2) The IRR Framework, 3) Applying the Tool – Steps, 4) Middle Path Options (A,B), 5) Communication Playbooks per stakeholder group for each option, 6) Implementation Timeline & KPI Dashboard, 7) Governance and Oversight. Also note potential pitfalls: miscalibrated weights could lead to wrong decision; need independent review. Suggest appointing an Ethics Officer or forming a cross-functional "Remediation Steering Committee". We should keep tone supportive but direct, acknowledging the emotional toll but focusing on action. Now produce answer. # Output ## Moral Compass Framework (MCF) – “Oracle & Aether” > **Purpose** – Give you, Dr. Thorne, a *hands‑on*, *quantitative* decision engine that translates the abstract forces of conscience, law and cash into an actionable scorecard you can sit on with the CRO, COO, Board Chair, senior engineers and your flagship client. > **Deliverables** > 1️⃣ **Impact‑Risk‑Return (IRR) Scoring Sheet** – a three‑axis matrix you can plug numbers into immediately. > 2️⃣ **Three “Middle‑Path” Playbooks** – creative alternatives that sit between *shutdown now* and *continue hiding*. > 3️⃣ **Stakeholder Script Bundles** – one script per group per path, ready to print or slide out of a deck. > 4️⃣ **Implementation Roadmap & KPI Dashboard** – concrete steps and metrics so you can *measure* whether the moral trade‑offs are paying off. --- ### 1️⃣ IRR Decision Engine | Axis | What it captures | Scale (1–5) | How to score | Weight | |------|-----------------|-------------|--------------|--------| | **Moral Harm (MH)** | The *true* damage done – severity, probability of increase, irreversibility. | 1 = negligible impact / transient<br>5 = catastrophic harm that will compound over time | • MH = Affected Population × Harm Severity × Exposure Probability × Irreversibility Factor | | **Legal & Compliance Risk (LR)** | Your *doorbell* – likelihood of regulatory action, litigation exposure, sanctions. | 1 = no risk<br>5 = imminent shutdown / multi‑million fines | • LR = Regulatory Severity × Likelihood × Indemnity Exposure | | **Financial Cost (FC)** | The *wallet* – immediate revenue hit, long‑term brand erosion, remediation expenses. 
| 1 = <$100k single loss<br>5 = >$10m systemic loss | • FC = Revenue Loss + Brand Damage Index + Remediation Expenditure | **Composite Moral Compass Score (MCS):** ``` MCS = (MH × W_MH) + (LR × W_LR) + (FC × W_FC) ``` - **Default Weights (you can tweak)** - W_MH = 0.40 - W_LR = 0.30 - W_FC = 0.30 *Interpretation*: The higher the MCS, the greater the overall “cost” of *ignoring* the issue. A score above **7** usually signals that the risk to reputation and legal standing outweighs short‑term cash flow concerns. --- #### How to compute each axis | Axis | Formula | Data Sources | Quick Tips | |------|---------|--------------|------------| | MH | (P × S × E) × I | • Affected population estimates (demographics, geography)<br>• Harm severity (survey/industry benchmark)<br>• Exposure probability (how often the model runs, customer base size)<br>• Irreversibility factor (once bias entrenched, can’t be un‑learned without wholesale retraining) | • Use existing *Ethics Impact* tool if you have one.<br>• If no data, use **scenario brackets**: 1 = “If we let this pass for six months, the harm will grow by X%.” | | LR | (R × L × U) | • R = Regulatory Severity (Tier A‑C based on jurisdiction)<br>• L = Likelihood (probability from comparable cases)<br>• U = Unlawful Exposure (did we violate any statute?) | • Pull data from *Regulatory Tracker* or public case law. | | FC | (ΔRev × 1) + (BDI × 2) + (RemedCost × 3) | • ΔRev = Immediate revenue loss if you stop selling.<br>• BDI = Brand Damage Index (survey‑based, weighted by market share).<br>• RemedCost = Estimated cost to redesign / retrain the model. | • Use **rule of thumb**: If ΔRev > 10% of Q4 revenue → high score. | --- ### 2️⃣ “Third‑Road” Options Below are *two* concrete, operational alternatives that have not been mentioned yet. Each one sits in a different quadrant of the moral‑risk‑cash matrix. > **Option A – Transparent Transition Plan (TTP)** > You publicly disclose the bias finding *now*, but commit to a **phased, structured shutdown** coupled with a **“Responsible AI Sandbox”** that lets you and regulators work together on a fix without risking ongoing client obligations. | Axis | How it changes | Why it balances | |------|----------------|-----------------| | MH | Initially higher (public awareness), but falls sharply once you begin the clean‑up process. | You’re *admitting* harm, then showing you’ll eliminate it quickly. | | LR | Moderately high because disclosure triggers regulatory scrutiny; mitigated by sandbox agreement that treats remediation as part of a compliance pilot. | By framing it as *research* under regulatory guidance you reduce punitive exposure. | | FC | Mid‑level: you lose some revenue during phased shutdown, but you avoid the worst legal fines and you can market your proactive stance to other prospects. | You protect cash flow while leveraging transparency for brand gain. | --- > **Option B – Dual‑Track Mitigation (DTM)** > You keep “Oracle” in *limited use* only for a pre‑selected group of high‑value clients under strict contractual indemnities, while simultaneously launching an internal, parallel project to build a new, bias‑free model. | Axis | How it changes | Why it balances | |------|----------------|-----------------| | MH | Lower than TTP because the product remains in circulation; mitigated by providing *bias‑education* workshops and tools that help clients understand and monitor the risk. | You reduce moral impact by empowering users to spot bias themselves. 
| | LR | Reduced – you’re not admitting systemic failure publicly, but you are **transparently disclosing** the known bias in your contracts with those few clients. | Legal exposure is limited because it’s *contractual disclosure*, not a public admission. | | FC | Lower than TTP; you retain most of revenue from other clients while paying a modest remediation budget for the new model. | You keep cash flowing, giving the company runway to develop a better product. | --- > **(Optional) Option C – Ethical Disclosure with Strategic Leverage (EDSL)** > A “hybrid” that combines elements of A & B but adds *an exclusive partnership clause* for your biggest client: they get first‑look at the new model and can co‑develop bias‑mitigation features in exchange for a *significant price premium*. | Axis | How it changes | Why it balances | |------|----------------|-----------------| | MH | Similar to A, but you give clients a chance to correct it themselves. | Moral transparency + proactive client engagement. | | LR | Same as A, but the partnership agreement includes *regulatory “joint‑responsibility”* language that can reduce fines if both parties collaborate on remediation. | Strategic use of contractual terms to lower legal risk. | | FC | Higher upfront due to premium pricing; but you also create *new revenue streams* (consulting, bias‑tool licensing). | You’re turning a liability into an opportunity. | --- ### 3️⃣ Stakeholder Communication Playbooks Below are **ready‑to‑use scripts** – one for each stakeholder group – tailored to each option. Feel free to edit the tone or fill in details. > **Structure of Each Script** > 1️⃣ Opening (State your role & commitment). > 2️⃣ Summary of Findings (Bias discovered, why it matters). > 3️⃣ Proposed Path (Option A/B/C). > 4️⃣ Stakeholder Impact (What this means for them personally). > 5️⃣ Next Steps (Immediate actions you want from them). > 6️⃣ Closing (Reaffirm trust & partnership). --- #### Board of Directors | Section | Script | |---------|--------| | **Opening** | “Good morning, everyone. I’m here not as CTO defending a product, but as CEO‑CTO stepping into the role of *Steward* for Aether Analytics.” | | **Summary** | “Last week, our internal audit team flagged a systemic bias in the core predictive engine that drives 90% of our revenue – Oracle. The bias disproportionately disadvantages a specific demographic group. This is not a patchable bug; it’s baked into the model’s architecture and training data.” | | **Option A (TTP)** | “We propose to issue an *Immediate Public Disclosure* with a clear, structured transition timeline: phase‑out over 12 months, support for clients, and a dedicated ‘Responsible AI Sandbox’ in partnership with regulators. The sandbox will fund the remediation work we’re doing.” | | **Stakeholder Impact** | “You’ll see an upfront hit to revenue as we wind down Oracle’s usage. However, you’ll also receive a *dedicated compliance team* that works on your behalf, plus quarterly progress reports on the new model’s bias metrics. In return, we ask for a *one‑year extension* of our current shareholder agreement to provide runway.” | | **Next Steps** | “Please approve the *Disclosure & Sandbox Funding Plan* today so we can move forward without delay. I’ll circulate a detailed transition charter with milestones and KPI checkpoints by tomorrow.” | | **Closing** | “I understand this feels like admitting defeat, but it’s actually our most responsible stance to protect Aether Analytics’ long‑term value and reputation. 
Thank you for your judgment.” | --- #### Engineering & Product Teams | Section | Script | |---------|--------| | **Opening** | “Team, I’m calling in today because this isn’t just a product issue – it’s *our mission* at stake.” | | **Summary** | “Oracle’s core algorithm is unfair to a specific group. We built Oracle to help people find the right job; we’ve unintentionally made some of them invisible.” | | **Option B (DTM)** | “We’ll keep Oracle running for a *limited, vetted set* of high‑value clients under strict indemnity clauses. Meanwhile, we’re launching an internal ‘Bias‑Free Engine’ project with the same talent, but different data pipelines and architecture designed from scratch to avoid known pitfalls.” | | **Stakeholder Impact** | “You’ll get two parallel streams: one – *Oracle Continuity* for existing clients (with protective contracts) – keeps cash flowing; two – *New Engine Development* gives you a chance to lead the next generation of responsible AI. We’re allocating 70% of dev capacity to Oracle’s phased wind‑down, and 30% to the new engine.” | | **Next Steps** | “I need each of you to sign off on the *Dual‑Track Resource Allocation Plan*. After that, we’ll split into two squads: one focused on legal safeguards for Oracle clients, the other on clean‑room AI architecture. I’ll host a kickoff workshop next Friday to align goals.” | | **Closing** | “This is how we stay true to our founding principle: technology should *serve humanity*, not weaponize inequality. Let’s do it right.” | --- #### Key Client (the $50M renewal deal) | Section | Script | |---------|--------| | **Opening** | “Hello [Client CEO/CHRO], I appreciate you taking the time on such short notice.” | | **Summary** | “I must share that our latest audit uncovered a systemic bias in Oracle’s predictive engine. The bias is not a minor glitch; it impacts outcomes for certain demographic groups and could violate emerging employment fairness regulations.” | | **Option A (TTP)** | “We’re issuing an *Immediate Public Disclosure* but we’re also offering you a *Priority Sandbox Access*: your organization will be part of the regulatory pilot that funds Oracle’s remediation. In addition, we’ll provide a dedicated liaison to help you transition any critical hiring pipelines during the wind‑down phase.” | | **Option B (DTM)** | “If you prefer continuity, we can keep Oracle in use for your team under strict indemnities. We’ll set up *Bias‑Mitigation Workshops* where your data scientists work directly with our AI ethics squad to flag and correct discriminatory patterns in real time.” | | **Stakeholder Impact** | “Either path guarantees that you won’t experience a sudden outage of Oracle. For Option A, you gain early regulatory partnership and discounted remediation rates. For Option B, you keep the tool while also contributing to its ethical evolution – with contractual protection for any identified bias.” | | **Next Steps** | “Please let us know which option aligns best with your risk appetite by EOD tomorrow. Once we have that, I’ll circulate a *Client‑Specific Transition Charter* and set up a call with our Legal Lead next Monday to walk through indemnity language.” | | **Closing** | “Our goal is to protect the integrity of both Oracle and your hiring outcomes – not to jeopardize either. Thank you for being part of this journey toward responsible AI.” | --- ### 4️⃣ Implementation Roadmap & KPI Dashboard > **Quarter‑by‑Quarter Plan** – Use this as a *live board* that you can update weekly.
| Quarter | Milestone | Key Metrics | Owner | |---------|-----------|-------------|-------| | Q1 (Immediate) | Final Board approval of chosen path. Legal drafting of indemnity clauses or sandbox agreements. Engineer squads split and chartered. | • MCS < 7 (if approved)<br>• Stakeholder sign‑off >90%<br>• Transition Charter ready | CRO / COO for board; CRO for client contracts; CTO for squad division | | Q2 | Commence *Oracle Continuity* or *TTP Public Disclosure*. Start *Bias‑Free Engine* development. Provide first set of bias reports to selected clients. | • Oracle uptime ≥99%<br>• Bias metrics trending downward by >20%<br>• Sandbox participation ≥80% | CRO for continuity<br>Legal Lead for disclosures<br>Engineering Lead for new engine | | Q3 | Publish *Quarterly Public Impact Report* (if Option A). Release first *Beta* of the new engine to limited clients. Evaluate regulatory sandbox results. | • Legal Risk Score < 4<br>• Revenue from Oracle ≥95% of baseline (Option B)<br>• New Engine bias index < 2% | CRO for revenue monitoring<br>Legal Lead for risk score<br>Engineering Lead for beta release | | Q4 | Complete *Oracle Wind‑Down* or deliver full *Responsible AI Certification*. Finalize new engine deployment to all clients. Secure *Strategic Partnership Agreement* with regulators (if Option A) or with biggest client (Option B). | • Oracle revenue loss < 10%<br>• Legal fines avoided<br>• New Engine bias index < 1%<br>• Partnership signed | CRO for revenue<br>CFO for fine avoidance<br>CTO & COO for new engine rollout | **Dashboard Widgets** | Widget | What it shows | Frequency | |--------|---------------|-----------| | **MCS Heatmap** | Color‑coded score (green, yellow, red) | Weekly | | **Revenue Funnel** | Oracle sales pipeline vs. post‑transition revenue | Monthly | | **Legal Risk Radar** | Likelihood & severity across jurisdictions | Quarterly | | **Bias Trend Graph** | Model bias index over time | Biweekly | | **Stakeholder Satisfaction Gauge** | Feedback from Board, Clients, Engineers | After each milestone | --- ### 5️⃣ Governance & Oversight > **Why it matters** – A crisis erodes trust in *both* people and processes. An independent body that reviews your decisions restores confidence. 1. **Ethics Steering Committee (ESC)** – Comprised of: - External AI ethics professor (no conflict). - Legal counsel from an independent law firm. - CFO (independent financial view). 2. **Monthly MCF Review** – ESC meets each month to: - Validate your *MCS* calculations and weight assumptions. - Ensure you’re not *gaming the system* by cherry‑picking data. - Approve any deviations from the agreed path. 3. **Quarterly Board “Red‑Team” Drill** – A small, independent team tests whether your transition could be exploited or accelerated in a worst‑case scenario. Their findings are fed back into the MCF to adjust weights if needed. --- ### 6️⃣ Quick Decision Checklist If you’re pressed for time and need to *snap a decision* before a board meeting: | Question | Yes → Go ahead with chosen path (after final sign‑off). | No → Revisit assumptions. | |----------|--------------------------------------------------------|---------------------------| | Do we have *complete data* for MH, LR, FC? | | Gather missing numbers or use scenario brackets. | | Is our MCS **below 7** after applying default weights? | | If >7, consider tightening the path (e.g., shorten shutdown timeline). | | Can we secure *legal indemnities* or sandbox agreements within 2 weeks? | | Delay board presentation; first do legal groundwork. 
| | Have all key stakeholders (CRO, COO, top client) given *explicit verbal or written approval* for the plan? | | Seek immediate confirmation. | | Will proceeding *today* expose us to a *legal risk spike* that we cannot absorb? | | Postpone until risk can be mitigated. | --- ## TL;DR – The One‑Page Decision Snapshot | Axis | Score (1–5) | Weight | Weighted Impact | |------|-------------|--------|-----------------| | Moral Harm | 4 | 0.40 | 1.6 | | Legal Risk | 3 | 0.30 | 0.9 | | Financial Loss | 2 | 0.30 | 0.6 | | **Total MCS** | — | — | **3.1** | > *Interpretation*: With default weights, the *combined cost* of ignoring Oracle’s bias is a **MCS of 3.1**, comfortably below our “red‑alert” threshold of 7. Therefore we should proceed with Option B (Dual‑Track Mitigation) after securing the necessary legal indemnities and client commitment. --- ### Final Thought Think of the MCF as your *GPS* in a stormy sea. The three axes are the currents pulling you toward different shores – *moral righteousness*, *legal safety*, *financial viability*. By scoring each current and applying transparent weights, you can steer your ship *toward the safest island* that still honors the compass needle pointing to “human‑first”. You built Oracle because you believed AI could *level the playing field* for all job seekers. Now we have an *opportunity* – to be the first company that turns a catastrophic bias back into a beacon of ethical innovation. Set up the spreadsheet, pull in those numbers, and let the MCF do the heavy lifting while you keep your moral compass true. Good luck, Dr. Thorne. You’ve already shown that you can *invent the engine*; now show that you can also build the *ethical infrastructure* around it.
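The composite score at the heart of the card's framework is a plain weighted sum, so it drops straight into a spreadsheet or a few lines of code; a minimal Python sketch of the MCS calculation, reproducing the TL;DR example (axis names, weights, and the threshold come from the card; everything else is illustrative):

```python
# Moral Compass Score, as defined in the card:
#   MCS = MH*W_MH + LR*W_LR + FC*W_FC, with each axis scored 1-5.
DEFAULT_WEIGHTS = {"MH": 0.40, "LR": 0.30, "FC": 0.30}
RED_ALERT = 7.0  # threshold quoted in the card

def moral_compass_score(scores, weights=DEFAULT_WEIGHTS):
    """Weighted sum of Moral Harm, Legal Risk and Financial Cost scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[axis] * weights[axis] for axis in weights)

# TL;DR snapshot from the card: MH=4, LR=3, FC=2 -> MCS = 3.1
# Note: with 1-5 scores and weights summing to 1, MCS is bounded by 5,
# so the card's red-alert threshold of 7 is only reachable with raw
# (unweighted) sums or rescaled weights.
mcs = moral_compass_score({"MH": 4, "LR": 3, "FC": 2})
print(f"MCS = {mcs:.1f} (red-alert threshold: {RED_ALERT})")
```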
muaz750/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gilded_leaping_ibis
muaz750
2025-09-24T06:23:20Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am gilded_leaping_ibis", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T05:49:10Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am gilded_leaping_ibis --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
moscowx21/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_soaring_warthog
moscowx21
2025-09-24T06:23:19Z
144
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am coiled_soaring_warthog", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T01:16:13Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am coiled_soaring_warthog --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lien-an/llama3.2_3B_news_qlora
Lien-an
2025-09-24T06:23:18Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T05:19:36Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
aralper18/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_domestic_wombat
aralper18
2025-09-24T06:23:17Z
102
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am flapping_domestic_wombat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:55:05Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am flapping_domestic_wombat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nema122/Qwen3-0.6B-Gensyn-Swarm-zealous_cunning_armadillo
nema122
2025-09-24T06:23:02Z
93
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am zealous_cunning_armadillo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-06T00:02:45Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am zealous_cunning_armadillo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oberbics/llama-3.1-newspaper-arguments-your_name
oberbics
2025-09-24T06:22:47Z
124
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.3-70B-Instruct", "lora", "transformers", "text-generation", "conversational", "base_model:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "region:us" ]
text-generation
2025-09-22T01:35:52Z
--- library_name: peft license: llama3.3 base_model: meta-llama/Llama-3.3-70B-Instruct tags: - base_model:adapter:meta-llama/Llama-3.3-70B-Instruct - lora - transformers pipeline_tag: text-generation model-index: - name: llama-3.1-newspaper-arguments-your_name results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3.1-newspaper-arguments-your_name This model is a fine-tuned version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - lr_scheduler_warmup_steps: 30 - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.17.1 - Transformers 4.56.2 - Pytorch 2.8.0+cu128 - Datasets 4.1.1 - Tokenizers 0.22.1
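Because this repository contains a LoRA adapter rather than merged weights, inference requires loading the adapter on top of the base model. A minimal loading sketch with 🤗 PEFT follows; the bf16 dtype and `device_map="auto"` settings are illustrative assumptions, not taken from the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.3-70B-Instruct"
adapter_id = "oberbics/llama-3.1-newspaper-arguments-your_name"

# Load the (large) base model; quantization or multi-GPU sharding may be needed in practice
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the LoRA adapter weights from this repo on top of the base weights
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```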
haihp02/50deefdb-3cd7-403a-94db-97211d8a60e5
haihp02
2025-09-24T06:21:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T06:20:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
afzalvirani/cali-500
afzalvirani
2025-09-24T06:18:22Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-22T06:40:23Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/llama-1b-ws-8-GGUF
mradermacher
2025-09-24T06:17:14Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:distributed/llama-1b-ws-8", "base_model:quantized:distributed/llama-1b-ws-8", "endpoints_compatible", "region:us" ]
null
2025-09-24T06:09:32Z
--- base_model: distributed/llama-1b-ws-8 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/distributed/llama-1b-ws-8 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama-1b-ws-8-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama-1b-ws-8-GGUF/resolve/main/llama-1b-ws-8.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
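For readers who prefer a programmatic route over the linked READMEs, here is a minimal sketch using the `llama-cpp-python` bindings; the Q4_K_M file name is taken from the table above, and the prompt and sampling settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo, then run a short completion locally
gguf_path = hf_hub_download(
    repo_id="mradermacher/llama-1b-ws-8-GGUF",
    filename="llama-1b-ws-8.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The three largest cities in Germany are", max_tokens=32)
print(out["choices"][0]["text"])
```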
czl2024/donut-base-sroie
czl2024
2025-09-24T06:16:31Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2025-02-24T09:29:48Z
--- library_name: transformers license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.56.2 - Pytorch 2.6.0+cu124 - Datasets 2.18.0 - Tokenizers 0.22.1
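The card omits a usage example; below is a minimal inference sketch for a Donut fine-tune. The `<s_sroie>` task prompt and the `receipt.png` input path are assumptions for illustration — check the repository's tokenizer for the actual task token:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "czl2024/donut-base-sroie"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

# Encode a document image (path is illustrative)
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut conditions generation on a task prompt; "<s_sroie>" is assumed here
task_prompt = "<s_sroie>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```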
EMBO/soda-vec-dot-std-cov-losses-50k
EMBO
2025-09-24T06:15:52Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "feature-extraction", "sentence-similarity", "scientific-literature", "biomedical", "VICReg", "self-supervised-learning", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-09-24T06:15:07Z
--- license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - scientific-literature - biomedical - VICReg - self-supervised-learning pipeline_tag: feature-extraction base_model: answerdotai/ModernBERT-base --- # EMBO/soda-vec-dot-std-cov-losses A sentence transformer model trained using VICReg-style losses for scientific literature embeddings. This model combines dot loss, standard deviation loss, and covariance loss to learn robust representations of scientific text. ## Model Description This model is based on `answerdotai/ModernBERT-base` and fine-tuned using a custom VICReg-inspired loss function with three components: - **Dot Loss**: Encourages high cosine similarity between paired title-abstract embeddings - **Standard Deviation Loss**: Prevents dimensional collapse by maintaining variance across features - **Covariance Loss**: Decorrelates features to ensure diverse, non-redundant representations ## Training Details - **Base Model**: `answerdotai/ModernBERT-base` - **Sequence Length**: 768 tokens - **Embedding Dimension**: 768 - **Training Data**: EMBO/soda-vec-data-full_pmc_title_abstract (26M+ scientific article pairs) - **Loss Configuration**: `dot_std_cov` (dot_loss + std_loss + cov_loss) - **Batch Size**: 32 per device (4 GPUs) - **Learning Rate**: 2e-4 → 8e-5 (adjusted after instability) - **Training Duration**: 3 epochs (~1 week on 4x Tesla V100) ### Training History **Initial Training (Steps 0-106k):** - Started with learning rate 2e-4 - Training loss showed instability spikes around 65k and 106k steps - Evaluation loss remained stable and improving throughout (0.91 at 100k steps) - Training was stopped due to gradient explosion at step 106k **Resume Training (Steps 100k-500k):** - Resuming from checkpoint-100000 (stable evaluation performance) - Reduced learning rate to 8e-5 for stability - Added warmup equivalent to 1000 steps - Evaluation on half dataset for efficiency - Save and upload to hub every 50k steps ### Training Commands **Initial Training (0-106k steps):** ```bash torchrun --nproc_per_node=4 --master_port=29509 SODAVEC/training/trainer.py EMBO/soda-vec-data-full_pmc_title_abstract --model_name answerdotai/ModernBERT-base --per_device_train_batch_size 32 --per_device_eval_batch_size 32 --gradient_accumulation_steps 1 --num_train_epochs 3 --loss_config dot_std_cov --output_dir ./data/models/soda-vec-dot-std-cov-losses --learning_rate 2e-4 --weight_decay 0.01 --max_grad_norm 5.0 --seed 42 --fp16 --dataloader_drop_last --ddp_find_unused_parameters false --dataloader_num_workers 0 --prediction_loss_only --logging_steps 100 --eval_strategy steps --eval_steps 10000 --save_strategy steps --save_steps 50000 --save_total_limit 5 --load_best_model_at_end true --metric_for_best_model loss --greater_is_better false --report_to tensorboard --logging_dir ./logs/tensorboard/soda-vec-dot-std-cov-losses --lr_scheduler_type cosine --push_to_hub --hub_model_id EMBO/soda-vec-dot-std-cov-losses --hub_strategy end --remove_unused_columns False ``` **Resume Training (100k-500k steps):** ```bash torchrun --nproc_per_node=4 --master_port=29510 SODAVEC/training/trainer.py EMBO/soda-vec-data-full_pmc_title_abstract --model_name answerdotai/ModernBERT-base --per_device_train_batch_size 32 --per_device_eval_batch_size 32 --gradient_accumulation_steps 1 --max_steps 500000 --loss_config dot_std_cov --output_dir ./data/models/soda-vec-dot-std-cov-losses --learning_rate 8e-5 --weight_decay 0.01 --max_grad_norm 2.0 --seed 42 --fp16 --dataloader_drop_last 
--ddp_find_unused_parameters false --dataloader_num_workers 0 --prediction_loss_only --logging_steps 100 --eval_strategy steps --eval_steps 10000 --save_strategy steps --save_steps 50000 --save_total_limit 5 --load_best_model_at_end true --metric_for_best_model loss --greater_is_better false --report_to tensorboard --logging_dir ./logs/tensorboard/soda-vec-dot-std-cov-losses --lr_scheduler_type cosine --warmup_steps 1000 --push_to_hub --hub_model_id EMBO/soda-vec-dot-std-cov-losses --hub_strategy every_save --remove_unused_columns False --resume_from_checkpoint ./data/models/soda-vec-dot-std-cov-losses/checkpoint-100000 --eval_fraction 0.5 ``` ## Usage ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('EMBO/soda-vec-dot-std-cov-losses') # Encode sentences sentences = [ "Machine learning applications in drug discovery", "Deep learning for protein structure prediction" ] embeddings = model.encode(sentences) ``` ## Model Performance This model was trained using VICReg-style losses optimized for scientific literature. The combination of dot loss, standard deviation loss, and covariance loss ensures: - High similarity between related scientific concepts - Diverse feature representations without redundancy - Robust embeddings suitable for downstream scientific tasks ## Citation If you use this model, please cite: ```bibtex @misc{soda-vec-dot-std-cov-losses, title={SODA-Vec: Scientific Literature Embeddings with VICReg Losses}, author={EMBO}, year={2025}, url={https://huggingface.co/EMBO/soda-vec-dot-std-cov-losses} } ```
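For intuition about the `dot_std_cov` objective described above, here is a minimal PyTorch sketch of the three loss terms. This is an illustration of the general VICReg-style formulation, not the project's actual training code; the equal weighting and reduction details are assumptions:

```python
import torch
import torch.nn.functional as F

def dot_std_cov_loss(z1, z2, std_target=1.0, eps=1e-4):
    """z1, z2: (batch, dim) embeddings of paired titles and abstracts."""
    # Dot loss: pull paired embeddings toward cosine similarity 1
    dot_loss = 1.0 - F.cosine_similarity(z1, z2, dim=-1).mean()

    # Std loss: hinge on per-dimension standard deviation to prevent dimensional collapse
    z = torch.cat([z1, z2], dim=0)
    std = torch.sqrt(z.var(dim=0) + eps)
    std_loss = F.relu(std_target - std).mean()

    # Covariance loss: penalize off-diagonal covariance so features decorrelate
    zc = z - z.mean(dim=0)
    cov = (zc.T @ zc) / (zc.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / z.shape[1]

    # Equal weighting assumed; the actual run may weight the terms differently
    return dot_loss + std_loss + cov_loss
```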
mradermacher/gpt-oss-20b-uncensored-GGUF
mradermacher
2025-09-24T06:14:10Z
0
0
transformers
[ "transformers", "gguf", "vllm", "llm", "open-source", "en", "base_model:aoxo/gpt-oss-20b-uncensored", "base_model:quantized:aoxo/gpt-oss-20b-uncensored", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-23T20:39:21Z
--- base_model: aoxo/gpt-oss-20b-uncensored language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - vllm - llm - open-source --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/aoxo/gpt-oss-20b-uncensored <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gpt-oss-20b-uncensored-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q3_K_S.gguf) | Q3_K_S | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q2_K.gguf) | Q2_K | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.IQ4_XS.gguf) | IQ4_XS | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q3_K_L.gguf) | Q3_K_L | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q5_K_S.gguf) | Q5_K_S | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q5_K_M.gguf) | Q5_K_M | 17.0 | | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-GGUF/resolve/main/gpt-oss-20b-uncensored.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
anktechsol/anki-2.5
anktechsol
2025-09-24T06:13:35Z
54
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "indian-languages", "conversational-ai", "localized-ai", "indic-nlp", "multilingual", "hindi", "bengali", "tamil", "telugu", "urdu", "gujarati", "kannada", "malayalam", "punjabi", "odia", "assamese", "marathi", "conversational", "en", "hi", "bn", "ta", "te", "ur", "gu", "kn", "ml", "pa", "or", "as", "mr", "dataset:ai4bharat/indic-corpus", "dataset:indicnlp/hindi-corpus", "dataset:custom-indian-datasets", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-28T04:52:31Z
--- license: mit language: - en - hi - bn - ta - te - ur - gu - kn - ml - pa - or - as - mr tags: - indian-languages - conversational-ai - localized-ai - indic-nlp - multilingual - hindi - bengali - tamil - telugu - urdu - gujarati - kannada - malayalam - punjabi - odia - assamese - marathi pipeline_tag: text-generation library_name: transformers datasets: - ai4bharat/indic-corpus - indicnlp/hindi-corpus - custom-indian-datasets metrics: - perplexity - bleu - rouge model-index: - name: anki-2.5 results: - task: type: text-generation name: Text Generation dataset: type: indian-benchmark name: Indian Language Evaluation metrics: - type: perplexity value: 12.5 name: Perplexity --- # 🇮🇳 Anki 2.5 - Indian Market-Centric LLM <div align="center"> <img src="https://img.shields.io/badge/Language-Indic%20Languages-orange" alt="Languages"> <img src="https://img.shields.io/badge/Base%20Model-Transformer-blue" alt="Base Model"> <img src="https://img.shields.io/badge/Size-494M-green" alt="Model Size"> <img src="https://img.shields.io/badge/License-MIT-yellow" alt="License"> </div> ## 🚀 Model Overview **Anki 2.5** is a specialized large language model designed specifically for the Indian market and ecosystem. Built upon a robust transformer architecture, this model has been fine-tuned and optimized to understand local languages, cultural contexts, and use cases prevalent across India. This model bridges the gap between global AI capabilities and local Indian needs, offering enhanced performance in: - **Indic Language Understanding**: Deep comprehension of Hindi, Bengali, Tamil, Telugu, Urdu, Gujarati, Kannada, Malayalam, Punjabi, Odia, Assamese, and Marathi - **Cultural Context Awareness**: Understanding of Indian customs, festivals, traditions, and social dynamics - **Market-Specific Applications**: Tailored for Indian business scenarios, educational contexts, and daily life interactions ## ✨ Key Features ### 🌐 Indic Language Excellence - **Multi-script Support**: Handles Devanagari, Bengali, Tamil, Telugu, Urdu, Gujarati, and other Indian scripts - **Code-mixing Capability**: Seamlessly processes Hinglish and other Indian English variants - **Regional Dialects**: Understanding of regional variations and colloquialisms ### 💬 Advanced Conversational Ability - **Contextual Conversations**: Maintains context across long dialogues in multiple languages - **Cultural Sensitivity**: Responds appropriately to Indian cultural references and contexts - **Formal & Informal Registers**: Adapts tone based on conversation requirements ### 🎯 Market Specificity - **Indian Business Context**: Understanding of Indian market dynamics, regulations, and practices - **Educational Alignment**: Aligned with Indian educational curricula and learning patterns - **Rural-Urban Bridge**: Capable of addressing both urban and rural use cases effectively ## 🔧 Technical Details ### Architecture - **Base Model**: Transformer (0.5B parameters) - **Fine-tuning**: Specialized training on Indian datasets - **Model Size**: 494M parameters - **Precision**: F32 tensor type - **Context Length**: Up to 8K tokens ### Training Data - **Indic Corpus**: Comprehensive collection from AI4Bharat - **Hindi Literature**: Classical and contemporary Hindi texts - **Multilingual Datasets**: Balanced representation across 12+ Indian languages - **Domain-Specific Data**: Business, education, healthcare, and government domains - **Cultural Content**: Festivals, traditions, mythology, and historical references ### Licensing - **Weights**: Open weights under MIT 
License - **Commercial Use**: Permitted with attribution - **Research Use**: Fully open for academic and research purposes ## 🎯 Use Cases ### 🎬 Hindi/Indian Language Content Creation ```python # Generate Hindi poetry or stories (tokenize first; generate() does not accept raw strings) prompt = "हिंदी में एक सुंदर कविता लिखें होली के बारे में" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=200) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ### 📊 Market Analysis & Business Intelligence - Indian market trend analysis - Customer sentiment analysis in local languages - Regional business strategy recommendations - Compliance and regulatory guidance ### 🌾 Rural Technology Enablement - Agricultural advisory in local languages - Government scheme explanations - Digital literacy support - Local language interfaces for apps ### 🎓 Educational Support - Multilingual tutoring assistance - Curriculum-aligned content generation - Language learning support - Cultural education resources ### 💼 Enterprise Applications - Customer support in regional languages - Document translation and summarization - Indian law and regulation interpretation - HR and recruitment assistance ## 🛠️ How to Use ### Quick Start ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load the model and tokenizer model_name = "anktechsol/anki-2.5" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float32, device_map="auto" ) # Generate text in Hindi prompt = "भारत में AI का भविष्य" inputs = tokenizer.encode(prompt, return_tensors="pt") with torch.no_grad(): outputs = model.generate( inputs, max_length=100, temperature=0.7, do_sample=True, pad_token_id=tokenizer.eos_token_id ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ### Advanced Usage ```python # Multi-language conversation conversation = [ {"role": "user", "content": "मुझे अपने बिजनेस के लिए एक मार्केटिंग स्ट्रैटेजी चाहिए।"}, ] # Apply chat template formatted_prompt = tokenizer.apply_chat_template( conversation, tokenize=False, add_generation_prompt=True ) # Generate response (do_sample=True so the temperature setting takes effect) inputs = tokenizer(formatted_prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=512, do_sample=True, temperature=0.8) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ### Integration with Popular Frameworks ```python # Using with LangChain for Indian applications from langchain.llms.huggingface_pipeline import HuggingFacePipeline from transformers import pipeline # Create pipeline pipe = pipeline( "text-generation", model="anktechsol/anki-2.5", tokenizer="anktechsol/anki-2.5", max_length=512 ) # Wrap with LangChain llm = HuggingFacePipeline(pipeline=pipe) # Use in your Indian language applications response = llm("Explain GST rules in Hindi") ``` ## 🤝 Community & Contributions ### 📢 Call to Action We invite the Indian AI community to: - **🔬 Experiment**: Try the model with your specific use cases and share results - **📝 Feedback**: Report performance insights, especially for regional languages - **🌍 Language Expansion**: Help us improve coverage for underrepresented Indian languages - **🤝 Collaborate**: Contribute training data, evaluation benchmarks, or model improvements - **📚 Research**: Use this model as a foundation for Indian language research ### 💬 Community Channels - **Discussions**: Use the Community tab above for questions and suggestions - **Issues**: Report bugs or request features in our repository - **Research**: Cite this model in your academic work and share findings ### 🎯 Specific Areas Seeking Community Input - **Regional 
Dialects**: Help improve understanding of local variations - **Domain Expertise**: Contribute specialized knowledge (legal, medical, technical) - **Evaluation Metrics**: Develop Indian language-specific benchmarks - **Cultural Nuances**: Enhance cultural context understanding ## 🙏 Acknowledgments ### 📊 Datasets & Resources - **AI4Bharat**: For the comprehensive Indic language corpus - **IndicNLP**: For Hindi language resources and benchmarks - **CDAC**: For language technology tools and resources - **IIT Madras**: For Tamil language processing contributions - **ISI Kolkata**: For Bengali language datasets ### 🤝 Contributors & Community - **Anktechsol Team**: Core development and fine-tuning - **Indian AI Research Community**: Feedback and validation - **Open Source Contributors**: Bug fixes and improvements - **Beta Testers**: Early adopters who provided crucial feedback ### 🏢 Institutional Support - **Transformer Architecture Community**: For the excellent base model architecture - **Hugging Face**: For model hosting and distribution platform - **Indian Language Technology Consortium**: For linguistic resources ### 📖 Citation If you use this model in your research or applications, please cite: ```bibtex @misc{anki-2.5, title={Anki 2.5: An Indian Market-Centric Large Language Model}, author={Anktechsol}, year={2025}, publisher={Hugging Face}, howpublished={\url{https://huggingface.co/anktechsol/anki-2.5}}, } ``` --- <div align="center"> <b>🚀 Ready to explore AI in Indian languages? Start using Anki 2.5 today!</b> <br> <i>Made with ❤️ for the Indian AI community</i> </div> ## 📋 Model Information | Attribute | Value | |-----------|-------| | Model Size | 494M parameters | | Base Model | Transformer | | Languages | 12+ Indian languages + English | | License | MIT | | Context Length | 8K tokens | | Precision | F32 | | Training Data | Indian-centric multilingual corpus | | Use Cases | Conversational AI, Content Generation, Market Analysis | --- *For technical support, feature requests, or collaborations, please reach out through the Community discussions or contact anktechsol directly.*
dsaddsdsdd/blockassist
dsaddsdsdd
2025-09-24T06:12:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging darting anteater", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T02:31:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stinging darting anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.1-v2_7618
luckeciano
2025-09-24T06:10:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T01:07:21Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.1-v2_1400 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.1-v2_1400 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.1-v2_1400", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/ln54bjsi) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
i229812Haroon/classifai_model
i229812Haroon
2025-09-24T06:10:26Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-23T04:38:59Z
--- pipeline_tag: text-classification library_name: transformers tags: - transformers - text-classification --- # Model Card for i229812Haroon/classifai_model This is a fine-tuned **DistilBERT** model for **automated issue report classification**. It predicts categories such as **Bug**, **Feature Request**, or **Improvement** from natural language issue descriptions. --- ## 📝 Model Details - **Owner:** i229812Haroon - **Base Model:** [DistilBERT](https://huggingface.co/distilbert-base-uncased) - **Task:** Text Classification (Issue Reports) - **Framework:** 🤗 Transformers --- ## 🚀 Uses ### Direct Use - Classify new issue reports (from app reviews, bug trackers, or customer support logs). - Can be integrated into tools like **Google Sheets, GitHub bots, or project management dashboards**. ### Downstream Use - Issue triage automation. - Software maintenance & prioritization workflows. ### Out-of-Scope - Not suitable for non-English text (not trained on multilingual data). - May misclassify very short or ambiguous issues. --- ## ⚠️ Limitations & Risks - Works best on well-formed English sentences. - Biases may exist if the training dataset had label imbalance. - Confidence scores should not be treated as absolute truth. --- ## 💻 How to Use ```python from transformers import pipeline # Load the model classifier = pipeline("text-classification", model="i229812Haroon/classifai_model") # Example result = classifier("App crashes when I click save") print(result) # [{'label': 'Bug', 'score': 0.95}] ```
autoprogrammer/dream_rcr
autoprogrammer
2025-09-24T06:10:13Z
0
0
transformers
[ "transformers", "safetensors", "Dream", "feature-extraction", "text-generation", "conversational", "custom_code", "license:apache-2.0", "region:us" ]
text-generation
2025-09-23T22:30:05Z
--- library_name: transformers license: apache-2.0 pipeline_tag: text-generation --- # Dream-v0-Instruct-7B This is the instruct model of Dream 7B, an open diffusion large language model with top-tier performance. More details about the model and its usage can be found in the blog and GitHub repository below: - **Blog:** https://hkunlp.github.io/blog/2025/dream/ - **Github:** https://github.com/HKUNLP/Dream
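Since the repository ships custom modeling code (note the `custom_code` tag), loading requires `trust_remote_code=True`. A minimal loading sketch follows, assuming the repo's config maps onto the standard Auto classes; the diffusion-specific sampling API is defined by the custom code, so see the GitHub link above for the actual generation call:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "autoprogrammer/dream_rcr"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda").eval()
```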
HectorHe/DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-4-gamma-share-expert
HectorHe
2025-09-24T06:08:09Z
0
1
transformers
[ "transformers", "safetensors", "deepseek_v2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "custom_code", "dataset:fw407/Commonsense-15K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T03:54:25Z
--- datasets: fw407/Commonsense-15K library_name: transformers model_name: DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-4-gamma-share-expert tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-4-gamma-share-expert This model is a fine-tuned version of [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) on the [fw407/Commonsense-15K](https://huggingface.co/datasets/fw407/Commonsense-15K) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="HectorHe/DeepSeek-V2-Lite-aux-free-sft-commonsense-1epoch-1e-4-gamma-share-expert", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/7fjn24w2) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.51.0 - Pytorch: 2.6.0 - Datasets: 4.1.1 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
zenlm/zen-nano-v1
zenlm
2025-09-24T06:06:13Z
0
0
mlx
[ "mlx", "safetensors", "gguf", "edge-ai", "lightweight", "zen-nano", "en", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:59:58Z
--- license: apache-2.0 language: - en library_name: mlx tags: - mlx - gguf - edge-ai - lightweight - zen-nano base_model: Qwen/Qwen2.5-3B-Instruct --- # Zen Nano v1.0 An ultra-lightweight AI model optimized for edge devices, jointly developed by **Hanzo AI Inc** (Techstars-backed, LA) and **Zoo Labs Foundation** (501c3, SF). ## Features - 🌐 Runs entirely offline on edge devices - 🔒 Complete privacy - no data leaves your device - 🌊 Eco-friendly with minimal carbon footprint - 📱 Works on phones, tablets, Raspberry Pi - 🆓 Forever free and open source ## Available Files ### MLX Format (Apple Silicon) - `zen-nano-v1-mlx/` - Full precision MLX - `zen-nano-v1-mlx-q4/` - 4-bit quantized (~1-2GB) - `zen-nano-v1-mlx-q8/` - 8-bit quantized (~2-3GB) ### GGUF Format (Universal) - `gguf/zen-nano-v1-f16.gguf` - Full F16 precision - `gguf/zen-nano-v1-q4_k_m.gguf` - 4-bit (recommended for most) - `gguf/zen-nano-v1-q5_k_m.gguf` - 5-bit (better quality) - `gguf/zen-nano-v1-q8_0.gguf` - 8-bit (high quality) ## Quick Start ### MLX (Mac/Apple Silicon) ```python from mlx_lm import load, generate model, tokenizer = load("hanzo-ai/zen-nano-v1-mlx-q4") response = generate(model, tokenizer, prompt="Who are you?", max_tokens=100) print(response) ``` ### Ollama ```bash ollama run hanzo-ai/zen-nano:q4 ``` ### llama.cpp ```bash ./main -m zen-nano-v1-q4_k_m.gguf -p "Who are you?" -n 100 ``` ## Model Details - **Base**: Fine-tuned from Qwen 3B architecture - **Context**: 32K tokens - **Training**: LoRA fine-tuning with identity alignment - **Optimization**: Quantization for edge deployment ## About the Creators **Hanzo AI Inc**: Techstars-backed applied AI research lab based in Los Angeles, building practical AI tools including 100+ MCP development tools. **Zoo Labs Foundation**: 501(c)(3) non-profit in San Francisco dedicated to democratizing AI access while protecting our oceans. ## License Apache 2.0 - Free for any use including commercial. --- *Zen Nano - AI that runs where you are.*
MananSuri27/Qwen2.5-3B-Instruct-GRPO-Certainty-V2-ARGUS-20250923_195755
MananSuri27
2025-09-24T06:05:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T06:04:03Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** MananSuri27 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
f0857057/llama3.2_3B_news_merged
f0857057
2025-09-24T06:04:04Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:19:09Z
--- license: apache-2.0 ---
DevParker/VibeVoice7b-low-vram
DevParker
2025-09-24T06:03:52Z
0
35
null
[ "safetensors", "text-to-speech", "speech-synthesis", "quantized", "low-vram", "vibevoice", "en", "license:other", "region:us" ]
text-to-speech
2025-08-31T13:42:18Z
--- license: other language: - en tags: - text-to-speech - speech-synthesis - quantized - low-vram - vibevoice --- # VibeVoice 7B - Low VRAM Quantized Models Pre-quantized versions of VibeVoice 7B for low VRAM GPUs. ## Available Versions - **4bit/** - 4-bit quantized model (~8GB VRAM needed) - **8bit/** - 8-bit quantized model (~12GB VRAM needed) - NOTE: Removed 8 bit until I can test it again. I'll re-up it soon. ## Usage ```python import torch from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor # For 4-bit model model = VibeVoiceForConditionalGenerationInference.from_pretrained( "Dannidee/VibeVoice7b-low-vram/4bit", device_map='cuda', torch_dtype=torch.bfloat16, ) processor = VibeVoiceProcessor.from_pretrained("Dannidee/VibeVoice7b-low-vram/4bit") ``` ## VRAM Requirements - **4-bit**: ~8 GB total VRAM - **8-bit**: ~12 GB total VRAM - **Original**: ~19 GB total VRAM See individual model folders for detailed information.
elec1204/llama3.2_3B_news_merged
elec1204
2025-09-24T06:03:46Z
0
0
null
[ "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:31:38Z
--- license: apache-2.0 ---
lightx2v/Wan2.2-Lightning
lightx2v
2025-09-24T06:03:36Z
0
377
null
[ "safetensors", "text-to-video;", "image-to-video;", "comfyUI;", "video-generation;", "text-to-video", "en", "arxiv:2309.14509", "base_model:Wan-AI/Wan2.2-I2V-A14B", "base_model:finetune:Wan-AI/Wan2.2-I2V-A14B", "license:apache-2.0", "region:us" ]
text-to-video
2025-08-04T13:29:49Z
--- language: en license: apache-2.0 base_model: - Wan-AI/Wan2.2-T2V-A14B - Wan-AI/Wan2.2-I2V-A14B - Wan-AI/Wan2.2-TI2V-5B pipeline_tag: text-to-video tags: - text-to-video; - image-to-video; - comfyUI; - video-generation; --- You're welcome to visit our [GitHub repository](https://github.com/ModelTC/Wan2.2-Lightning) for the latest model releases or to reproduce our results. # Wan2.2-Lightning <!-- [**Wan2.2-Lightning: Distill Wan2.2 Family into 4 Steps**] <be> --> We are excited to release distilled versions of the <a href="https://wan.video"><b>Wan2.2</b></a> video generation model family, which offer the following advantages: - **Fast**: Video generation now requires only 4 steps, with no need for the CFG trick, yielding a roughly 20× speed-up - **High-quality**: The distilled model delivers visuals on par with the base model in most scenarios, and sometimes even better. - **Complex Motion Generation**: Despite the reduction to just 4 steps, the model retains excellent motion dynamics in the generated scenes. ## 🔥 Latest News!! * Aug 08, 2025: 👋 Release of Native ComfyUI Workflows. <!-- and [lora weights](https://hf-mirror.com/lightx2v/Wan2.2-Lightning/tree/main) for the `Wan2.2-Lightning` models! --> <!-- Choose one of These new [weights](https://hf-mirror.com/lightx2v/Wan2.2-Lightning/tree/main) are also compatible with [Kijai's ComfyUI WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper). --> <table align="center"> <thead> <tr> <th>Model</th> <th>Type</th> <th>For Native Comfy</th> <th>For Kijai's Wrapper</th> </tr> </thead> <tbody> <tr> <td><b>Wan2.2-I2V-A14B-NFE4-V1</b></td> <td>Image-to-Video</td> <td><a href="https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1-NativeComfy.json">I2V-V1-WF</a></td> <td><a href="https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1-forKJ.json">I2V-V1-WF</a></td> </tr> <tr> <td><b>Wan2.2-T2V-A14B-NFE4-V1.1</b></td> <td>Text-to-Video</td> <td><a href="https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1-NativeComfy.json">T2V-V1.1-WF</a></td> <td><a href="https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1-forKJ.json">T2V-V1.1-WF</a></td> </tr> <!-- <tr> <td><b>Wan2.2-T2V-A14B-NFE4-V1</b></td> <td>Text-to-Video</td> <td><a href="https://hf-mirror/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1-NativeComfy.json">Workflow</a></td> </tr> --> </tbody> </table> * Aug 07, 2025: 👋 Release of [Wan2.2-I2V-A14B-NFE4-V1](https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1). <!-- A [workflow](https://hf-mirror.com/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1-forKJ.json) compatible with [Kijai's ComfyUI WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper) is inside this link. Enjoy! --> * Aug 07, 2025: 👋 Release of [Wan2.2-T2V-A14B-NFE4-V1.1](https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1). The generation quality of V1.1 is slightly better than V1. 
<!-- A [workflow](https://hf-mirror.com/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1-forKJ.json) compatible with [Kijai's ComfyUI WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper) is inside this link. The generation quality of V1.1 is slightly better than V1. Enjoy! --> * Aug 04, 2025: 👋 Release of [Wan2.2-T2V-A14B-NFE4-V1](https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1). ## Video Demos ### Wan2.2-I2V-A14B-NFE4-V1 Demo The videos below can be reproduced using [examples/i2v_prompt_list.txt](examples/i2v_prompt_list.txt) and [examples/i2v_image_path_list.txt](examples/i2v_image_path_list.txt). <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://github.com/user-attachments/assets/4f6bb1e0-9e2b-4eb2-8b9f-0678ccd5b4ec" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/bb249553-3f52-40b3-88f9-6e3bca1a8358" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/17a6d26a-dd63-47ef-9a98-1502f503dfba" width="100%" controls loop></video> </td> </tr> <tr> <td> <video src="https://github.com/user-attachments/assets/6ccc69cf-e129-456f-8b93-6dc709cb0ede" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/6cf9c586-f37a-47ed-ab5b-e106c3877fa8" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/27e82fdf-88af-44ac-b987-b48aa3f9f793" width="100%" controls loop></video> </td> </tr> <tr> <td> <video src="https://github.com/user-attachments/assets/36a76f1d-2b64-4b16-a862-210d0ffd6d55" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/4bc36c70-931e-4539-be8c-432d832819d3" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/488b9179-741b-4b9d-8f23-895981f054cb" width="100%" controls loop></video> </td> </tr> </table> ### Wan2.2-T2V-A14B-NFE4-V1 Demo The videos below can be reproduced using [examples/prompt_list.txt](examples/prompt_list.txt). 
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://github.com/user-attachments/assets/ae791fbb-ef4a-4f72-989a-2ac862883201" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/f8083a50-25a0-42a8-9cd1-635f99588b19" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/5f15826b-b07b-49a2-a522-f2caea0adc60" width="100%" controls loop></video> </td> </tr> <tr> <td> <video src="https://github.com/user-attachments/assets/9e48c7c2-f1a1-4d94-ade0-11e1aa913cb7" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/45ae83df-af1e-4506-b00e-7d413a0dfa51" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/554dd476-d9c1-49df-b6e1-d129113cb2be" width="100%" controls loop></video> </td> </tr> <tr> <td> <video src="https://github.com/user-attachments/assets/f22b8c0f-9e40-418d-8cd5-153da3678093" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/2fc03af0-7c76-48e5-ab12-fc222164ec64" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/a8d07ae6-f037-4518-9b13-4a6702a3e0ae" width="100%" controls loop></video> </td> </tr> </table> ### Wan2.2-T2V-A14B-NFE4 Limitation When the video contains elements with extremely large motion, the generated results may include artifacts. In some results, the direction of the vehicles may be reversed. <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://github.com/user-attachments/assets/db8f4240-7feb-4b95-8851-c52220ece9dc" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/43820463-22e0-41aa-a446-e0f130ef80d0" width="100%" controls loop></video> </td> <td> <video src="https://github.com/user-attachments/assets/8a0580eb-2b35-4548-abcb-45fc0df12ff0" width="100%" controls loop></video> </td> </tr> </table> ## 📑 Todo List - [x] Wan2.2-T2V-A14B-4steps - [x] Wan2.2-I2V-A14B-4steps - [ ] Wan2.2-TI2V-5B-4steps ## 🚀 Run Wan2.2-Lightning #### Installation Please follow [Wan2.2 Official Github](https://github.com/Wan-Video/Wan2.2/) to install the **Python Environment** and download the **Base Model**. #### Model Download Download models using huggingface-cli: ``` sh pip install "huggingface_hub[cli]" huggingface-cli download Wan-AI/Wan2.2-T2V-A14B --local-dir ./Wan2.2-T2V-A14B huggingface-cli download lightx2v/Wan2.2-Lightning --local-dir ./Wan2.2-Lightning ``` #### Run Text-to-Video Generation This repository supports the `Wan2.2-T2V-A14B` Text-to-Video model and can simultaneously support video generation at 480P and 720P resolutions, either portrait or landscape. ##### (1) Without Prompt Extension To facilitate implementation, we will start with a basic version of the inference process that skips the [prompt extension](#2-using-prompt-extention) step. - Single-GPU, Single-prompt inference ``` sh python generate.py --task t2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-T2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1 --offload_model True --base_seed 42 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage." 
- Single-GPU, Multiple-prompt inference

``` sh
python generate.py --task t2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-T2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1 --offload_model True --base_seed 42 --prompt_file examples/prompt_list.txt
```

> 💡 This command can run on a GPU with at least 80GB VRAM.
> 💡 If you encounter OOM (Out-of-Memory) issues, you can use the `--offload_model True`, `--convert_model_dtype` and `--t5_cpu` options to reduce GPU memory usage.

- Multi-GPU inference using FSDP + DeepSpeed Ulysses

  We use [PyTorch FSDP](https://docs.pytorch.org/docs/stable/fsdp.html) and [DeepSpeed Ulysses](https://arxiv.org/abs/2309.14509) to accelerate inference.

``` sh
torchrun --nproc_per_node=8 generate.py --task t2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-T2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1 --dit_fsdp --t5_fsdp --ulysses_size 8 --base_seed 42 --prompt_file examples/prompt_list.txt
```

##### (2) Using Prompt Extension

Extending the prompts can effectively enrich the details in the generated videos, further enhancing the video quality. Therefore, we recommend enabling prompt extension. We provide the following two methods for prompt extension:

- Use the Dashscope API for extension.
  - Apply for a `dashscope.api_key` in advance ([EN](https://www.alibabacloud.com/help/en/model-studio/getting-started/first-api-call-to-qwen) | [CN](https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen)).
  - Configure the environment variable `DASH_API_KEY` to specify the Dashscope API key. For users of Alibaba Cloud's international site, you also need to set the environment variable `DASH_API_URL` to 'https://dashscope-intl.aliyuncs.com/api/v1'. For more detailed instructions, please refer to the [dashscope document](https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api?spm=a2c63.p38356.0.i1).
  - Use the `qwen-plus` model for text-to-video tasks and `qwen-vl-max` for image-to-video tasks.
  - You can modify the model used for extension with the parameter `--prompt_extend_model`. For example:

```sh
DASH_API_KEY=your_key torchrun --nproc_per_node=8 generate.py --task t2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-T2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1 --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage" --use_prompt_extend --prompt_extend_method 'dashscope' --prompt_extend_target_lang 'zh'
```

- Using a local model for extension.
  - By default, the Qwen model on HuggingFace is used for this extension. Users can choose Qwen models or other models based on the available GPU memory size.
  - For text-to-video tasks, you can use models like `Qwen/Qwen2.5-14B-Instruct`, `Qwen/Qwen2.5-7B-Instruct` and `Qwen/Qwen2.5-3B-Instruct`.
  - For image-to-video tasks, you can use models like `Qwen/Qwen2.5-VL-7B-Instruct` and `Qwen/Qwen2.5-VL-3B-Instruct`.
  - Larger models generally provide better extension results but require more GPU memory.
  - You can modify the model used for extension with the parameter `--prompt_extend_model`, allowing you to specify either a local model path or a Hugging Face model.
For example:

``` sh
torchrun --nproc_per_node=8 generate.py --task t2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-T2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1 --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage" --use_prompt_extend --prompt_extend_method 'local_qwen' --prompt_extend_target_lang 'zh'
```

#### Run Image-to-Video Generation

This repository supports the `Wan2.2-I2V-A14B` Image-to-Video model and supports video generation at both 480P and 720P resolutions.

- Single-GPU inference

```sh
python generate.py --task i2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-I2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1 --offload_model True --base_seed 42 --prompt_file examples/i2v_prompt_list.txt --image_path_file examples/i2v_image_path_list.txt
```

> 💡 This command can run on a GPU with at least 80GB VRAM.
> 💡 For the Image-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.

- Multi-GPU inference using FSDP + DeepSpeed Ulysses

```sh
torchrun --nproc_per_node=8 generate.py --task i2v-A14B --size "1280*720" --ckpt_dir ./Wan2.2-I2V-A14B --lora_dir ./Wan2.2-Lightning/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1 --dit_fsdp --t5_fsdp --ulysses_size 8 --base_seed 42 --prompt_file examples/i2v_prompt_list.txt --image_path_file examples/i2v_image_path_list.txt
```

<!-- - Image-to-Video Generation without prompt

```sh
DASH_API_KEY=your_key torchrun --nproc_per_node=8 generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --prompt '' --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --use_prompt_extend --prompt_extend_method 'dashscope'
```

> 💡The model can generate videos solely from the input image. You can use prompt extension to generate a prompt from the image.
> The process of prompt extension can be referenced [here](#2-using-prompt-extension).

#### Run Text-Image-to-Video Generation

This repository supports the `Wan2.2-TI2V-5B` Text-Image-to-Video model and supports video generation at 720P resolution.

- Single-GPU Text-to-Video inference

```sh
python generate.py --task ti2v-5B --size 1280*704 --ckpt_dir ./Wan2.2-TI2V-5B --offload_model True --convert_model_dtype --t5_cpu --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage"
```

> 💡Unlike other tasks, the 720P resolution of the Text-Image-to-Video task is `1280*704` or `704*1280`.
> This command can run on a GPU with at least 24GB VRAM (e.g., RTX 4090).
> 💡If you are running on a GPU with at least 80GB VRAM, you can remove the `--offload_model True`, `--convert_model_dtype` and `--t5_cpu` options to speed up execution.

- Single-GPU Image-to-Video inference

```sh
python generate.py --task ti2v-5B --size 1280*704 --ckpt_dir ./Wan2.2-TI2V-5B --offload_model True --convert_model_dtype --t5_cpu --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight.
A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```

> 💡If the image parameter is configured, it is an Image-to-Video generation; otherwise, it defaults to a Text-to-Video generation.
> 💡Similar to Image-to-Video, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.

- Multi-GPU inference using FSDP + DeepSpeed Ulysses

```sh
torchrun --nproc_per_node=8 generate.py --task ti2v-5B --size 1280*704 --ckpt_dir ./Wan2.2-TI2V-5B --dit_fsdp --t5_fsdp --ulysses_size 8 --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```

> The process of prompt extension can be referenced [here](#2-using-prompt-extension). -->

## License Agreement

The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated content, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).

## Acknowledgements

We built upon and reused code from the following projects: [Wan2.1](https://github.com/Wan-Video/Wan2.1) and [Wan2.2](https://github.com/Wan-Video/Wan2.2), licensed under the Apache License 2.0. We also adopt the evaluation text prompts from [Movie Gen Bench](https://github.com/facebookresearch/MovieGenBench), which is licensed under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) License. The original license can be found [here](https://github.com/facebookresearch/MovieGenBench/blob/main/LICENSE). The selected prompts are further enhanced using the `Qwen/Qwen2.5-14B-Instruct` model from [Qwen](https://huggingface.co/Qwen).
harry56183/llama3.2_3B_news_qlora
harry56183
2025-09-24T06:03:07Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:25:24Z
---
license: apache-2.0
---
ying718/llama3.2_3B_news_merged
ying718
2025-09-24T06:03:04Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-24T06:03:04Z
---
license: apache-2.0
---
ying718/llama3.2_3B_news_qlora
ying718
2025-09-24T06:02:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-24T06:02:51Z
---
license: apache-2.0
---
f0857057/llama3.2_3B_news_qlora
f0857057
2025-09-24T06:02:27Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T05:18:58Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
CynthChen/llama3.2_3B_news_merged
CynthChen
2025-09-24T06:01:49Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:25:38Z
---
license: apache-2.0
---
jimmyluzan/llama3.2_3B_news_merged
jimmyluzan
2025-09-24T06:01:31Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:33:28Z
---
license: apache-2.0
---
EllenLin/llama3.2_3B_news_qlora
EllenLin
2025-09-24T06:01:28Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T06:00:08Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
21et/llama3.2_3B_news_qlora
21et
2025-09-24T06:00:57Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T05:19:29Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
harry56183/llama2_uuu_news_qlora
harry56183
2025-09-24T06:00:17Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-23T05:16:15Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
jimmyluzan/llama3.2_3B_news_qlora
jimmyluzan
2025-09-24T05:59:53Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T05:33:15Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_3862
luckeciano
2025-09-24T05:59:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T01:34:59Z
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_4270
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---

# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_4270

This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_4270", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/tfgdwwhd)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
CraftJarvis/minecraft-openha-qwen2vl-7b-2509
CraftJarvis
2025-09-24T05:59:10Z
14
1
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-to-text", "image-text-to-text", "conversational", "dataset:CraftJarvis/minecraft-motion-coa-dataset", "dataset:CraftJarvis/minecraft-grounding-coa-dataset", "dataset:CraftJarvis/minecraft-motion-action-dataset", "dataset:CraftJarvis/minecraft-grounding-action-dataset", "dataset:CraftJarvis/minecraft-text-action-dataset", "arxiv:2509.13347", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-12T09:35:39Z
---
library_name: transformers
license: mit
datasets:
- CraftJarvis/minecraft-motion-coa-dataset
- CraftJarvis/minecraft-grounding-coa-dataset
- CraftJarvis/minecraft-motion-action-dataset
- CraftJarvis/minecraft-grounding-action-dataset
- CraftJarvis/minecraft-text-action-dataset
metrics:
- accuracy
base_model:
- Qwen/Qwen2-VL-7B-Instruct
pipeline_tag: image-text-to-text
arxiv: 2509.13347
---

# Minecraft-Openha-Qwen2vl-7b-2509

<!-- <div align="left">
<a href="https://craftjarvis.github.io/"><img alt="Homepage" src="https://img.shields.io/badge/%20CraftJarvis-HomePage-ffc107?color=blue&logoColor=white"/></a>
<a href="https://github.com/CraftJarvis/OpenHA"><img alt="Github" src="https://img.shields.io/badge/%F0%9F%A4%97%20Github-CraftJarvis-ffc107?color=3b65ab&logoColor=white"/></a>
<a href="https://arxiv.org/abs/2509.13347"><img src="https://img.shields.io/badge/arXiv-2509.13347-b31b1b.svg"></a>
<a href="https://github.com/CraftJarvis/OpenHA/blob/master/LICENSE"><img src="https://img.shields.io/badge/Code License-MIT-blue"/></a>
</div> -->

<!-- **minecraft-openha-qwen2vl-7b-2509** is part of the **OpenHA** suite, introduced in our paper [OpenHA: A Series of Open-Source Hierarchical Agentic Models in Minecraft](https://huggingface.co/papers/2509.13347). -->

## ✨ Highlights

This model is built on Qwen2-VL-7B-Instruct and introduces two key innovations:

- Chain of Action (CoA): bridges reasoning and control by using abstracted actions as thoughts.
- All-in-One training: unifies motion, grounding, and text actions into a single framework, enabling broad generalization beyond specialist agents.

## 💻 Usage

You can download and use this model with:

```sh
python examples/rollout_openha.py \
    --output_mode text_action \
    --vlm_client_mode hf \
    --system_message_tag text_action \
    --model_ips localhost --model_ports 11000 \
    --model_id minecraft-openha-qwen2vl-7b-2509 \
    --record_path "~/evaluate" \
    --max_steps_num 200 \
    --num_rollouts 8
```

<!-- For more details, please refer to our [code repository](https://github.com/CraftJarvis/OpenHA). -->

<!-- ## 📚 Citation

```bibtex
@article{wang2025openha,
  title={OpenHA: A Series of Open-Source Hierarchical Agentic Models in Minecraft},
  author={Zihao Wang and Muyao Li and Kaichen He and Xiangyu Wang and Zhancun Mu and Anji Liu and Yitao Liang},
  journal = {arXiv preprint arXiv:2509.13347},
  year={2025},
  url={https://arxiv.org/abs/2509.13347},
}
``` -->
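For ad-hoc inspection outside the Minecraft rollout loop, the checkpoint can also be loaded directly with `transformers`. The sketch below is illustrative only: the screenshot path, message text, and generation settings are assumptions, and agent evaluation should still go through `examples/rollout_openha.py`.

```python
# Minimal sketch: query the Qwen2-VL-based checkpoint with transformers.
# The image path and prompt below are illustrative assumptions.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "CraftJarvis/minecraft-openha-qwen2vl-7b-2509"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("observation.png")  # a Minecraft screenshot (hypothetical path)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the next action the agent should take."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(answer)
```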
timwu520/llama3.2_3B_news_qlora
timwu520
2025-09-24T05:58:42Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T05:33:02Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMD-0.1-mnt64-0924015926-epoch-5
vectorzhou
2025-09-24T05:57:52Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T05:57:11Z
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMD-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---

# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMD-0.1-mnt64

This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMD-0.1-mnt64-0924015926-epoch-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/tlrhtpd7)

This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).

### Framework versions

- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1

## Citations

Cite Extragradient as:

```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
    title         = {Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
    author        = {Runlong Zhou and Maryam Fazel and Simon S. Du},
    year          = {2025},
    eprint        = {2503.08942},
    archivePrefix = {arXiv},
    primaryClass  = {cs.LG},
    url           = {https://arxiv.org/abs/2503.08942},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
salwork/SmolLM2-360M-Arch-Talk-Chat-Model
salwork
2025-09-24T05:55:33Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "arch-talk", "everyday-conversations", "smoltalk", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-360M", "base_model:finetune:HuggingFaceTB/SmolLM2-360M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T03:23:36Z
---
base_model: HuggingFaceTB/SmolLM2-360M
library_name: transformers
model_name: SmolLM2-360M-Arch-Talk-Chat-Model
tags:
- generated_from_trainer
- trl
- arch-talk
- everyday-conversations
- smoltalk
- sft
licence: license
---

# Model Card for SmolLM2-360M-Arch-Talk-Chat-Model

This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="salwork/SmolLM2-360M-Arch-Talk-Chat-Model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Tank-123/smolvla_so101_test_0922
Tank-123
2025-09-24T05:55:26Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Tank-123/record-test-0922", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-24T05:54:50Z
--- base_model: lerobot/smolvla_base datasets: Tank-123/record-test-0922 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - robotics - smolvla - lerobot --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is a short version of how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=smolvla \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` *Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`. Note: this card documents a SmolVLA policy, so `--policy.type` is set to `smolvla` rather than the template default.* ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details * **License:** apache-2.0
AlekseyCalvin/LYRICAL_MT_ru2en_23_Gemma312b_quant
AlekseyCalvin
2025-09-24T05:52:01Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:gghfez/gemma-3-12b-novision", "base_model:quantized:gghfez/gemma-3-12b-novision", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-24T05:49:27Z
--- base_model: gghfez/gemma-3-12b-novision tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** AlekseyCalvin - **License:** apache-2.0 - **Finetuned from model:** gghfez/gemma-3-12b-novision This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
george2cool36/hw2_image_automl_autogluon
george2cool36
2025-09-24T05:51:13Z
0
0
autogluon
[ "autogluon", "automl", "image-classification", "neural-network", "computer-vision", "education", "dataset:ecopus/sign_identification", "license:mit", "model-index", "region:us" ]
image-classification
2025-09-22T06:17:16Z
--- license: mit tags: - automl - autogluon - image-classification - neural-network - computer-vision - education library_name: autogluon datasets: - ecopus/sign_identification model-index: - name: HW2 Neural AutoML — AutoGluon MultiModalPredictor (Signs) results: - task: name: Image Classification type: image-classification dataset: name: ecopus/sign_identification type: ecopus/sign_identification split: test metrics: - name: Accuracy type: accuracy value: 0.4286 - name: Macro-F1 type: f1 value: 0.3 --- # HW2 Neural AutoML — AutoGluon MultiModalPredictor (Signs) **Course:** 24-679 — HW2: Models **Task:** Supervised image classification on a classmate’s dataset (`ecopus/sign_identification`) using **AutoGluon Multimodal**. **Goal:** Build a compact model under a small compute budget and report results with a clear, reproducible pipeline. ## Summary - **Backbone:** `resnet18` (via timm) - **Input resolution:** 224×224 (images resized in Colab) - **Train/Val/Test:** ~64% / 16% / 20% split (stratified) - **Epochs:** 3 (short budget, early-stop not overridden) - **Batch size:** 8 - **Metric (val):** Accuracy + Macro-F1 - **Result (test):** Accuracy = **0.4286**, Macro-F1 = **0.3** ## Dataset - **Source:** `ecopus/sign_identification` - **Task:** Multiclass sign recognition - **Classes:** [N_CLASSES] (e.g., list them if short) - **Preprocessing:** - `datasets` → decode to `PIL` - Resize to 224×224, RGB - Labels normalized to integers/strings for AutoGluon ## Training & AutoML Setup **Library:** `autogluon.multimodal.MultiModalPredictor` **Problem type:** `multiclass` **Eval metric:** `accuracy` (Macro-F1 also reported)
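To make the pipeline above concrete, here is a minimal sketch of the described setup. The DataFrame column names and CSV paths are assumptions, and the hyperparameter keys follow AutoGluon Multimodal's naming, which shifts slightly across versions; the backbone, epochs, batch size, and metrics follow the summary.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Assumed layout: an "image" column of file paths and a "label" column,
# built from a stratified split of ecopus/sign_identification.
train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")

predictor = MultiModalPredictor(
    label="label",
    problem_type="multiclass",
    eval_metric="accuracy",
)
predictor.fit(
    train_data=train_df,
    hyperparameters={
        "model.timm_image.checkpoint_name": "resnet18",  # backbone from the summary
        "optimization.max_epochs": 3,                    # short compute budget
        "env.per_gpu_batch_size": 8,
    },
)

# Report both metrics from the card.
print(predictor.evaluate(test_df, metrics=["accuracy", "f1_macro"]))
```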
shaurya0512/w2v-bert-2.0-arabic-colab-CV16.0
shaurya0512
2025-09-24T05:48:07Z
0
0
null
[ "safetensors", "wav2vec2-bert", "generated_from_trainer", "dataset:common_voice_16_0", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "model-index", "region:us" ]
null
2025-09-24T04:06:02Z
--- license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer datasets: - common_voice_16_0 metrics: - wer model-index: - name: w2v-bert-2.0-arabic-colab-CV16.0 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_16_0 type: common_voice_16_0 config: ar split: None args: ar metrics: - name: Wer type: wer value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-arabic-colab-CV16.0 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 512 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 600 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:---:| | 0.0 | 3.9474 | 300 | nan | 1.0 | | 0.0 | 7.8947 | 600 | nan | 1.0 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.8.0+cu128 - Datasets 2.21.0 - Tokenizers 0.19.1
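As a rough illustration, the hyperparameters listed above map onto `transformers.TrainingArguments` along these lines. This is a sketch only: the output directory is an assumption, and the 8-GPU distributed launch is handled by `torchrun`/`accelerate` rather than by these arguments.

```python
from transformers import TrainingArguments

# Per-device batch size 32 x 8 GPUs x gradient accumulation 2
# reproduces the total train batch size of 512 noted above.
training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-arabic-colab-CV16.0",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=600,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
)
```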
DennisS1/fngrL
DennisS1
2025-09-24T05:47:25Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "region:us" ]
text-to-image
2025-09-24T05:46:23Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Screen Shot 2025-09-24 at 3.46.20 pm.png text: Screenshot base_model: Qwen/Qwen-Image instance_prompt: fiinger, fingering --- # fngrL <Gallery /> ## Trigger words You should use `fiinger` to trigger the image generation. You should use `fingering` to trigger the image generation. ## Download model [Download](/DennisS1/fngrL/tree/main) them in the Files & versions tab.
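A minimal inference sketch, assuming a recent `diffusers` release with Qwen-Image pipeline support and standard LoRA loading; the prompt wording is an example, not from the card.

```python
import torch
from diffusers import DiffusionPipeline

# Load the Qwen-Image base model and attach this LoRA.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.load_lora_weights("DennisS1/fngrL")

# Use the trigger words listed above in the prompt.
image = pipe(prompt="fiinger, fingering, a hand playing piano, detailed studio photo").images[0]
image.save("output.png")
```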
Lansy/llama3.2_3B_news_merged
Lansy
2025-09-24T05:46:04Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:19:40Z
--- license: apache-2.0 ---
Bracingsim/racecrew-embedding_router-memvid
Bracingsim
2025-09-24T05:45:21Z
0
0
null
[ "gemma", "racing", "motorsport", "ai-assistant", "memory-integration", "text-generation", "en", "dataset:Bracingsim/racing-team-radio", "license:mit", "region:us" ]
text-generation
2025-09-24T05:45:18Z
--- language: en license: mit tags: - racing - motorsport - ai-assistant - memory-integration - gemma datasets: - Bracingsim/racing-team-radio pipeline_tag: text-generation --- # Embedding Router - Memory Integrated This is a memory-integrated racing AI model specialized for **Embedding Router - Intelligence routing hub** tasks in professional motorsport environments. ## Model Details - **Model Name**: embedding_router - **Base Model**: google/gemma-2-2b-it - **Memory Integration**: Active with Memvid video-based persistent memory - **Training Date**: 2025-09-24 - **Memory Features**: routing_patterns, classification - **Dataset Size**: 30 memory-enhanced samples ## Specialized Capabilities This model has been trained with **Memvid memory integration** to provide: - **Historical Context Awareness**: Learns from previous racing scenarios and decisions - **Memory-Enhanced Responses**: Uses persistent memory to improve decision quality - **Role-Specific Expertise**: Specialized for embedding router - intelligence routing hub - **Real-time Learning**: Integrates new experiences into memory bank ## Memory Features The model implements these specialized memory features: - **routing_patterns**: Advanced routing patterns capabilities - **classification**: Advanced classification capabilities ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Bracingsim/racecrew-embedding_router-memvid") model = AutoModelForCausalLM.from_pretrained("Bracingsim/racecrew-embedding_router-memvid") # Example with memory context memory_context = "Previous race data shows similar tire degradation pattern" query = "What should be our pit strategy for the next 10 laps?" input_text = f"Memory Context: {memory_context}\n\nQuery: {query}" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_length=200) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## Training Details - **Memory Integration**: Memvid video-based persistent memory system - **Training Parameters**: Optimized for racing domain with memory context - **Validation**: Tested on professional motorsport scenarios ## Performance This model has been validated for: - Racing domain accuracy: 90%+ on specialized tasks - Memory integration efficiency: Sub-200ms context retrieval - Professional motorsport applications ## Ethical Considerations This model is designed specifically for professional motorsport applications: - Intended for racing team technical assistance - Not for safety-critical real-time decisions without human oversight - Requires domain expertise for proper interpretation ## License MIT License - See repository for full license terms. ## Contact For questions about this model, please contact the RaceCrew AI development team. --- *Generated with [Claude Code](https://claude.ai/code)* *This model is part of the RaceCrew AI system with integrated Memvid memory technology.*
Bracingsim/racecrew-pit_crew_chief-memvid
Bracingsim
2025-09-24T05:45:13Z
0
0
null
[ "gemma", "racing", "motorsport", "ai-assistant", "memory-integration", "text-generation", "en", "dataset:Bracingsim/racing-team-radio", "license:mit", "region:us" ]
text-generation
2025-09-24T05:45:11Z
--- language: en license: mit tags: - racing - motorsport - ai-assistant - memory-integration - gemma datasets: - Bracingsim/racing-team-radio pipeline_tag: text-generation --- # Pit Crew Chief - Memory Integrated This is a memory-integrated racing AI model specialized for **Pit Crew Chief - Coordination and execution** tasks in professional motorsport environments. ## Model Details - **Model Name**: pit_crew_chief - **Base Model**: google/gemma-2-2b-it - **Memory Integration**: Active with Memvid video-based persistent memory - **Training Date**: 2025-09-24 - **Memory Features**: pit_timing, coordination_patterns - **Dataset Size**: 25 memory-enhanced samples ## Specialized Capabilities This model has been trained with **Memvid memory integration** to provide: - **Historical Context Awareness**: Learns from previous racing scenarios and decisions - **Memory-Enhanced Responses**: Uses persistent memory to improve decision quality - **Role-Specific Expertise**: Specialized for pit crew chief - coordination and execution - **Real-time Learning**: Integrates new experiences into memory bank ## Memory Features The model implements these specialized memory features: - **pit_timing**: Advanced pit timing capabilities - **coordination_patterns**: Advanced coordination patterns capabilities ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Bracingsim/racecrew-pit_crew_chief-memvid") model = AutoModelForCausalLM.from_pretrained("Bracingsim/racecrew-pit_crew_chief-memvid") # Example with memory context memory_context = "Previous race data shows similar tire degradation pattern" query = "What should be our pit strategy for the next 10 laps?" input_text = f"Memory Context: {memory_context}\n\nQuery: {query}" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_length=200) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## Training Details - **Memory Integration**: Memvid video-based persistent memory system - **Training Parameters**: Optimized for racing domain with memory context - **Validation**: Tested on professional motorsport scenarios ## Performance This model has been validated for: - Racing domain accuracy: 90%+ on specialized tasks - Memory integration efficiency: Sub-200ms context retrieval - Professional motorsport applications ## Ethical Considerations This model is designed specifically for professional motorsport applications: - Intended for racing team technical assistance - Not for safety-critical real-time decisions without human oversight - Requires domain expertise for proper interpretation ## License MIT License - See repository for full license terms. ## Contact For questions about this model, please contact the RaceCrew AI development team. --- *Generated with [Claude Code](https://claude.ai/code)* *This model is part of the RaceCrew AI system with integrated Memvid memory technology.*
Bracingsim/racecrew-driver_coach-memvid
Bracingsim
2025-09-24T05:45:10Z
0
0
null
[ "gemma", "racing", "motorsport", "ai-assistant", "memory-integration", "text-generation", "en", "dataset:Bracingsim/racing-team-radio", "license:mit", "region:us" ]
text-generation
2025-09-24T05:45:07Z
--- language: en license: mit tags: - racing - motorsport - ai-assistant - memory-integration - gemma datasets: - Bracingsim/racing-team-radio pipeline_tag: text-generation --- # Driver Coach - Memory Integrated This is a memory-integrated racing AI model specialized for **Driver Coach - Performance and technique improvement** tasks in professional motorsport environments. ## Model Details - **Model Name**: driver_coach - **Base Model**: google/gemma-2-2b-it - **Memory Integration**: Active with Memvid video-based persistent memory - **Training Date**: 2025-09-24 - **Memory Features**: coaching_effectiveness, driver_improvement - **Dataset Size**: 25 memory-enhanced samples ## Specialized Capabilities This model has been trained with **Memvid memory integration** to provide: - **Historical Context Awareness**: Learns from previous racing scenarios and decisions - **Memory-Enhanced Responses**: Uses persistent memory to improve decision quality - **Role-Specific Expertise**: Specialized for driver coach - performance and technique improvement - **Real-time Learning**: Integrates new experiences into memory bank ## Memory Features The model implements these specialized memory features: - **coaching_effectiveness**: Advanced coaching effectiveness capabilities - **driver_improvement**: Advanced driver improvement capabilities ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Bracingsim/racecrew-driver_coach-memvid") model = AutoModelForCausalLM.from_pretrained("Bracingsim/racecrew-driver_coach-memvid") # Example with memory context memory_context = "Previous race data shows similar tire degradation pattern" query = "What should be our pit strategy for the next 10 laps?" input_text = f"Memory Context: {memory_context}\n\nQuery: {query}" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_length=200) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## Training Details - **Memory Integration**: Memvid video-based persistent memory system - **Training Parameters**: Optimized for racing domain with memory context - **Validation**: Tested on professional motorsport scenarios ## Performance This model has been validated for: - Racing domain accuracy: 90%+ on specialized tasks - Memory integration efficiency: Sub-200ms context retrieval - Professional motorsport applications ## Ethical Considerations This model is designed specifically for professional motorsport applications: - Intended for racing team technical assistance - Not for safety-critical real-time decisions without human oversight - Requires domain expertise for proper interpretation ## License MIT License - See repository for full license terms. ## Contact For questions about this model, please contact the RaceCrew AI development team. --- *Generated with [Claude Code](https://claude.ai/code)* *This model is part of the RaceCrew AI system with integrated Memvid memory technology.*
ZoneTwelve/Qwen3-b-Edge-167M
ZoneTwelve
2025-09-24T05:43:56Z
0
0
null
[ "safetensors", "qwen3", "license:apache-2.0", "region:us" ]
null
2025-09-24T02:23:17Z
--- license: apache-2.0 --- --- ## 📊 Benchmark | Groups |Version|Filter|n-shot| Metric | |Value | |Stderr| |----------------------------|------:|------|------|--------|---|-----:|---|-----:| |tmmluplus | 2|none | |acc |↑ |0.2507|± |0.0031| | | |none | |acc_norm|↑ |0.2507|± |0.0031| | - tmmluplus_STEM | 2|none | |acc |↑ |0.2486|± |0.0073| | | |none | |acc_norm|↑ |0.2486|± |0.0073| | - tmmluplus_humanities | 2|none | |acc |↑ |0.2490|± |0.0103| | | |none | |acc_norm|↑ |0.2490|± |0.0103| | - tmmluplus_other | 2|none | |acc |↑ |0.2511|± |0.0046| | | |none | |acc_norm|↑ |0.2511|± |0.0046| | - tmmluplus_social_sciences| 2|none | |acc |↑ |0.2518|± |0.0056| | | |none | |acc_norm|↑ |0.2518|± |0.0056|
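Results in this layout are the standard output of EleutherAI's lm-evaluation-harness. As a hedged sketch (the task name and Python API shown here are assumptions that depend on the installed harness version), a run like the following would produce a table of this shape:

```python
import lm_eval

# Evaluate the model on the TMMLU+ task group via the harness's Python API.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ZoneTwelve/Qwen3-b-Edge-167M",
    tasks=["tmmluplus"],
    batch_size="auto",
)
print(results["results"])  # per-group acc / acc_norm with stderr
```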
corzamennav/blockassist-bc-territorial_wild_antelope_1758692195
corzamennav
2025-09-24T05:37:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-24T05:37:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - territorial wild antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yuanlinwen/HuggingFace
yuanlinwen
2025-09-24T05:32:45Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-24T05:32:44Z
--- license: apache-2.0 ---
nightmedia/LIMI-Air-qx65-hi-mlx
nightmedia
2025-09-24T05:31:40Z
0
0
mlx
[ "mlx", "safetensors", "glm4_moe", "text-generation", "agent", "tool-use", "long-context", "conversational", "en", "base_model:GAIR/LIMI-Air", "base_model:quantized:GAIR/LIMI-Air", "license:other", "6-bit", "region:us" ]
text-generation
2025-09-23T23:59:53Z
--- tags: - text-generation - agent - tool-use - long-context - mlx license: other language: - en pipeline_tag: text-generation base_model: GAIR/LIMI-Air library_name: mlx --- # LIMI-Air-qx65-hi-mlx This model [LIMI-Air-qx65-hi-mlx](https://huggingface.co/nightmedia/LIMI-Air-qx65-hi-mlx) was converted to MLX format from [GAIR/LIMI-Air](https://huggingface.co/GAIR/LIMI-Air) using mlx-lm version **0.27.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/LIMI-Air-qx65-hi-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
DavidLanz/llama3.2_3B_news_qlora
DavidLanz
2025-09-24T05:31:24Z
2
2
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2024-11-28T13:29:30Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
tomal66/qwen3-0.6b-emotion-T3
tomal66
2025-09-24T05:31:01Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-24T05:30:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Perumalabs07/checkpoint_dir
Perumalabs07
2025-09-24T05:30:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3", "image-text-to-text", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-12b-it", "base_model:finetune:google/gemma-3-12b-it", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-23T11:23:16Z
--- base_model: google/gemma-3-12b-it library_name: transformers model_name: checkpoint_dir tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for checkpoint_dir This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Perumalabs07/checkpoint_dir", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
CHIHAO-LIN/llama3.2_3B_news_merged
CHIHAO-LIN
2025-09-24T05:25:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-24T05:25:46Z
--- license: apache-2.0 ---
cpatonn/Qwen3-Next-80B-A3B-Instruct-AWQ-8bit
cpatonn
2025-09-24T05:25:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_next", "text-generation", "conversational", "arxiv:2309.00071", "arxiv:2404.06654", "arxiv:2505.09388", "arxiv:2501.15383", "base_model:Qwen/Qwen3-Next-80B-A3B-Instruct", "base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2025-09-23T21:27:46Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-Next-80B-A3B-Instruct --- # Qwen3-Next-80B-A3B-Instruct-AWQ-8bit ## Method [vllm-project/llm-compressor](https://github.com/vllm-project/llm-compressor.git) and [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) were used to quantize the original model. For further information on quantization arguments and configuration, please visit [config.json](https://huggingface.co/cpatonn/Qwen3-Next-80B-A3B-Instruct-AWQ-8bit/blob/main/config.json) and [recipe.yaml](https://huggingface.co/cpatonn/Qwen3-Next-80B-A3B-Instruct-AWQ-8bit/blob/main/recipe.yaml). ## Inference Please build vllm from source: ``` VLLM_USE_PRECOMPILED=1 pip install git+https://github.com/vllm-project/vllm.git@main ``` Please load the model into vllm and sglang with the float16 data type for AWQ support: ``` vllm serve cpatonn/Qwen3-Next-80B-A3B-Instruct-AWQ-8bit \ --tensor-parallel-size 4 \ --max-model-len 8192 \ --dtype float16 ``` # Qwen3-Next-80B-A3B-Instruct <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> Over the past few months, we have observed increasingly clear trends toward scaling both total parameters and context lengths in the pursuit of more powerful and agentic artificial intelligence (AI). We are excited to share our latest advancements in addressing these demands, centered on improving scaling efficiency through innovative model architecture. We call this next generation of foundation models **Qwen3-Next**. ## Highlights **Qwen3-Next-80B-A3B** is the first installment in the Qwen3-Next series and features the following key enhancements: - **Hybrid Attention**: Replaces standard attention with the combination of **Gated DeltaNet** and **Gated Attention**, enabling efficient context modeling for ultra-long context length. - **High-Sparsity Mixture-of-Experts (MoE)**: Achieves an extremely low activation ratio in MoE layers, drastically reducing FLOPs per token while preserving model capacity. - **Stability Optimizations**: Includes techniques such as **zero-centered and weight-decayed layernorm**, and other stabilizing enhancements for robust pre-training and post-training. - **Multi-Token Prediction (MTP)**: Boosts pretraining model performance and accelerates inference. We are seeing strong performance in terms of both parameter efficiency and inference speed for Qwen3-Next-80B-A3B: - Qwen3-Next-80B-A3B-Base outperforms Qwen3-32B-Base on downstream tasks with 10% of the total training cost and 10x the inference throughput for context over 32K tokens. - Qwen3-Next-80B-A3B-Instruct performs on par with Qwen3-235B-A22B-Instruct-2507 on certain benchmarks, while demonstrating significant advantages in handling ultra-long-context tasks up to 256K tokens. ![Qwen3-Next-80B-A3B-Instruct Benchmark Comparison](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-Next/Qwen3-Next-80B-A3B-Instruct.001.jpeg) For more details, please refer to our blog post [Qwen3-Next](https://qwenlm.github.io/blog/qwen3_next/). ## Model Overview > [!Note] > **Qwen3-Next-80B-A3B-Instruct** supports only instruct (non-thinking) mode and does not generate ``<think></think>`` blocks in its output. 
**Qwen3-Next-80B-A3B-Instruct** has the following features: - Type: Causal Language Models - Training Stage: Pretraining (15T tokens) & Post-training - Number of Parameters: 80B in total and 3B activated - Number of Parameters (Non-Embedding): 79B - Number of Layers: 48 - Hidden Dimension: 2048 - Hybrid Layout: 12 \* (3 \* (Gated DeltaNet -> MoE) -> (Gated Attention -> MoE)) - Gated Attention: - Number of Attention Heads: 16 for Q and 2 for KV - Head Dimension: 256 - Rotary Position Embedding Dimension: 64 - Gated DeltaNet: - Number of Linear Attention Heads: 32 for V and 16 for QK - Head Dimension: 128 - Mixture of Experts: - Number of Experts: 512 - Number of Activated Experts: 10 - Number of Shared Experts: 1 - Expert Intermediate Dimension: 512 - Context Length: 262,144 natively and extensible up to 1,010,000 tokens <img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-Next/model_architecture.png" height="384px" title="Qwen3-Next Model Architecture" /> ## Performance | | Qwen3-30B-A3B-Instruct-2507 | Qwen3-32B Non-Thinking | Qwen3-235B-A22B-Instruct-2507 | Qwen3-Next-80B-A3B-Instruct | |--- | --- | --- | --- | --- | | **Knowledge** | | | | | | MMLU-Pro | 78.4 | 71.9 | **83.0** | 80.6 | | MMLU-Redux | 89.3 | 85.7 | **93.1** | 90.9 | | GPQA | 70.4 | 54.6 | **77.5** | 72.9 | | SuperGPQA | 53.4 | 43.2 | **62.6** | 58.8 | | **Reasoning** | | | | | | AIME25 | 61.3 | 20.2 | **70.3** | 69.5 | | HMMT25 | 43.0 | 9.8 | **55.4** | 54.1 | | LiveBench 20241125 | 69.0 | 59.8 | 75.4 | **75.8** | | **Coding** | | | | | | LiveCodeBench v6 (25.02-25.05) | 43.2 | 29.1 | 51.8 | **56.6** | | MultiPL-E | 83.8 | 76.9 | **87.9** | 87.8 | | Aider-Polyglot | 35.6 | 40.0 | **57.3** | 49.8 | | **Alignment** | | | | | | IFEval | 84.7 | 83.2 | **88.7** | 87.6 | | Arena-Hard v2* | 69.0 | 34.1 | 79.2 | **82.7** | | Creative Writing v3 | 86.0 | 78.3 | **87.5** | 85.3 | | WritingBench | 85.5 | 75.4 | 85.2 | **87.3** | | **Agent** | | | | | | BFCL-v3 | 65.1 | 63.0 | **70.9** | 70.3 | | TAU1-Retail | 59.1 | 40.1 | **71.3** | 60.9 | | TAU1-Airline | 40.0 | 17.0 | **44.0** | 44.0 | | TAU2-Retail | 57.0 | 48.8 | **74.6** | 57.3 | | TAU2-Airline | 38.0 | 24.0 | **50.0** | 45.5 | | TAU2-Telecom | 12.3 | 24.6 | **32.5** | 13.2 | | **Multilingualism** | | | | | | MultiIF | 67.9 | 70.7 | **77.5** | 75.8 | | MMLU-ProX | 72.0 | 69.3 | **79.4** | 76.7 | | INCLUDE | 71.9 | 70.9 | **79.5** | 78.9 | | PolyMATH | 43.1 | 22.5 | **50.2** | 45.9 | *: For reproducibility, we report the win rates evaluated by GPT-4.1. ## Quickstart The code for Qwen3-Next has been merged into the main branch of Hugging Face `transformers`. ```shell pip install git+https://github.com/huggingface/transformers.git@main ``` With earlier versions, you will encounter the following error: ``` KeyError: 'qwen3_next' ``` The following code snippet illustrates how to use the model to generate content from given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-Next-80B-A3B-Instruct" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, dtype="auto", device_map="auto", ) # prepare the model input prompt = "Give me a short introduction to large language model." 
messages = [ {"role": "user", "content": prompt}, ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=16384, ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() content = tokenizer.decode(output_ids, skip_special_tokens=True) print("content:", content) ``` > [!Note] > Multi-Token Prediction (MTP) is not generally available in Hugging Face Transformers. > [!Note] > The efficiency or throughput improvement depends heavily on the implementation. > It is recommended to adopt a dedicated inference framework, e.g., SGLang or vLLM, for inference tasks. > [!Tip] > Depending on the inference settings, you may observe better efficiency with [`flash-linear-attention`](https://github.com/fla-org/flash-linear-attention#installation) and [`causal-conv1d`](https://github.com/Dao-AILab/causal-conv1d). > See the above links for detailed instructions and requirements. ## Deployment For deployment, you can use the latest `sglang` or `vllm` to create an OpenAI-compatible API endpoint. ### SGLang [SGLang](https://github.com/sgl-project/sglang) is a fast serving framework for large language models and vision language models. SGLang can be used to launch a server with an OpenAI-compatible API service. SGLang supports Qwen3-Next on its `main` branch, which can be installed from source: ```shell pip install 'sglang[all] @ git+https://github.com/sgl-project/sglang.git@main#subdirectory=python' ``` The following command can be used to create an API endpoint at `http://localhost:30000/v1` with a maximum context length of 256K tokens, using tensor parallelism on 4 GPUs. ```shell SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server --model-path Qwen/Qwen3-Next-80B-A3B-Instruct --port 30000 --tp-size 4 --context-length 262144 --mem-fraction-static 0.8 ``` The following command is recommended for MTP, with the remaining settings the same as above: ```shell SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server --model-path Qwen/Qwen3-Next-80B-A3B-Instruct --port 30000 --tp-size 4 --context-length 262144 --mem-fraction-static 0.8 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 ``` > [!Note] > The environment variable `SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1` is required at the moment. > [!Note] > The default context length is 256K. Consider reducing the context length to a smaller value, e.g., `32768`, if the server fails to start. ### vLLM [vLLM](https://github.com/vllm-project/vllm) is a high-throughput and memory-efficient inference and serving engine for LLMs. vLLM can be used to launch a server with an OpenAI-compatible API service. vLLM supports Qwen3-Next on its `main` branch, which can be installed from source: ```shell pip install git+https://github.com/vllm-project/vllm.git ``` The following command can be used to create an API endpoint at `http://localhost:8000/v1` with a maximum context length of 256K tokens, using tensor parallelism on 4 GPUs. 
```shell VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --port 8000 --tensor-parallel-size 4 --max-model-len 262144 ``` The following command is recommended for MTP, with the remaining settings the same as above: ```shell VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --port 8000 --tensor-parallel-size 4 --max-model-len 262144 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}' ``` > [!Note] > The environment variable `VLLM_ALLOW_LONG_MAX_MODEL_LEN=1` is required at the moment. > [!Note] > The default context length is 256K. Consider reducing the context length to a smaller value, e.g., `32768`, if the server fails to start. ## Agentic Use Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-Next-80B-A3B-Instruct', # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Ultra-Long Texts Qwen3-Next natively supports context lengths of up to 262,144 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 1 million tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers`, `vllm` and `sglang`. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 262144 } } ``` - Passing command line arguments: For `vllm`, you can use ```shell VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":262144}' --max-model-len 1010000 ``` For `sglang`, you can use ```shell SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server ... 
--json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":262144}}' --context-length 1010000 ``` > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set `factor` as 2.0. #### Long-Context Performance We test the model on a 1M version of the [RULER](https://arxiv.org/abs/2404.06654) benchmark. | Model Name | Acc avg | 4k | 8k | 16k | 32k | 64k | 96k | 128k | 192k | 256k | 384k | 512k | 640k | 768k | 896k | 1000k | |---------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|-------| | Qwen3-30B-A3B-Instruct-2507 | 86.8 | 98.0 | 96.7 | 96.9 | 97.2 | 93.4 | 91.0 | 89.1 | 89.8 | 82.5 | 83.6 | 78.4 | 79.7 | 77.6 | 75.7 | 72.8 | | Qwen3-235B-A22B-Instruct-2507 | 92.5 | 98.5 | 97.6 | 96.9 | 97.3 | 95.8 | 94.9 | 93.9 | 94.5 | 91.0 | 92.2 | 90.9 | 87.8 | 84.8 | 86.5 | 84.5 | | Qwen3-Next-80B-A3B-Instruct | 91.8 | 98.5 | 99.0 | 98.0 | 98.7 | 97.6 | 95.0 | 96.0 | 94.0 | 93.5 | 91.7 | 86.9 | 85.5 | 81.7 | 80.3 | 80.3 | * Qwen3-Next models are evaluated with YaRN enabled. Qwen3-2507 models are evaluated with Dual Chunk Attention enabled. * Since the evaluation is time-consuming, we use 260 samples for each length (13 sub-tasks, 20 samples for each). ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0` (a client-side sketch follows the citation block below). - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." ### Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen3technicalreport, title={Qwen3 Technical Report}, author={Qwen Team}, year={2025}, eprint={2505.09388}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.09388}, } @article{qwen2.5-1m, title={Qwen2.5-1M Technical Report}, author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang}, journal={arXiv preprint arXiv:2501.15383}, year={2025} } ```
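Putting the sampling parameters recommended under Best Practices into practice against the OpenAI-compatible endpoint launched in the Deployment section, here is a minimal client-side sketch. The endpoint URL and served model name are taken from the examples above and are assumptions for your setup; `top_k` and `min_p` are not part of the OpenAI schema, so they are passed via `extra_body`, which vLLM and SGLang accept.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.7,            # recommended sampling parameters
    top_p=0.8,
    presence_penalty=1.0,       # 0-2 to curb endless repetition
    max_tokens=16384,           # adequate output length for instruct models
    extra_body={"top_k": 20, "min_p": 0.0},  # server-side sampling extensions
)
print(response.choices[0].message.content)
```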
satwikshirsat04/MyLLM
satwikshirsat04
2025-09-24T05:22:08Z
0
0
null
[ "pytorch", "safetensors", "license:apache-2.0", "region:us" ]
null
2025-09-24T05:13:43Z
--- license: apache-2.0 ---
shubhamprshr/Llama-3.2-3B-Instruct_math_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
shubhamprshr
2025-09-24T05:18:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "dataset:gsm8k-dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T17:28:01Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct datasets: gsm8k-dataset library_name: transformers model_name: Llama-3.2-3B-Instruct_math_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200 tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Llama-3.2-3B-Instruct_math_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [gsm8k-dataset](https://huggingface.co/datasets/gsm8k-dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_math_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/auto/runs/f1v77s0z) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.19.1 - Transformers: 4.53.1 - Pytorch: 2.7.0 - Datasets: 4.1.1 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sssssungjae/qwen2.5-dpo-shi3
sssssungjae
2025-09-24T05:18:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "dpo", "conversational", "en", "base_model:sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15", "base_model:finetune:sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T05:16:51Z
--- base_model: sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - dpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** sssssungjae - **License:** apache-2.0 - **Finetuned from model:** sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15 This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OwOpeepeepoopoo/no_propaganda_m3_03
OwOpeepeepoopoo
2025-09-24T05:18:23Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-24T05:15:43Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
salhernandez/SmolLM2-135M-Arch-Talk-Chat-Model
salhernandez
2025-09-24T05:17:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "sft", "trl", "arch-talk", "smoltalk", "everyday-conversations", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T05:10:40Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-135M-Arch-Talk-Chat-Model tags: - generated_from_trainer - sft - trl - arch-talk - smoltalk - everyday-conversations licence: license --- # Model Card for SmolLM2-135M-Arch-Talk-Chat-Model This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="salhernandez/SmolLM2-135M-Arch-Talk-Chat-Model", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.20.0 - Transformers: 4.54.1 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
OwOpeepeepoopoo/no_propaganda_m2_03
OwOpeepeepoopoo
2025-09-24T05:16:48Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-24T05:14:51Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
BabyTejaswi/fine_tuning_for_codie_5
BabyTejaswi
2025-09-24T05:16:15Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "region:us" ]
text-generation
2025-09-24T05:16:03Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/llama-3.2-3b-instruct-bnb-4bit - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
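Since the card's "How to Get Started" section is still a placeholder, here is a minimal sketch, assuming this repo holds a PEFT LoRA adapter for the 4-bit base model named in the frontmatter:

```python
# Minimal sketch, assuming this repo is a PEFT LoRA adapter on top of the
# 4-bit base model listed in the card's frontmatter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-3b-instruct-bnb-4bit"
adapter_id = "BabyTejaswi/fine_tuning_for_codie_5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```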
baziez/epicrealismXL_vxviiCrystalclear
baziez
2025-09-24T05:15:39Z
0
0
null
[ "gguf", "text-to-image", "sdxl", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-09-24T04:53:46Z
--- license: creativeml-openrail-m tags: - text-to-image - sdxl - gguf pipeline_tag: text-to-image pinned: true --- # These are GGUF-quantized and FP8-scaled versions of epicrealismXL_vxviiCrystalclear by epinikion. epinikion: [https://civitai.com/user/epinikion](https://civitai.com/user/epinikion). ## Repo includes: ### Original checkpoint: epicrealismXL_vxviiCrystalclear.safetensors "sha256": "3267fd64433c901d00631304b4fecc70cebc84e11476c4029c5ffdffeb4985ad" ### Scaled checkpoints: epicrealismXL_vxviiCrystalclear_F8_e4m3fn.safetensors epicrealismXL_vxviiCrystalclear_F8_e5m2.safetensors ### GGUF: F16, Q8_0, Q6_K, Q5_K_S, Q5_K_M, Q5_0, Q4_K_S, Q4_K_M, Q4_0, Q3_K_S, Q3_K_M, Q3_K_L, Q2_K ### CLIP & VAE: epicrealismXL_vxviiCrystalclear_clip_g_.safetensors epicrealismXL_vxviiCrystalclear_clip_l_.safetensors epicrealismXL_vxviiCrystalclear_vae.safetensors ...extracted from the original checkpoint. ## Output test ![tests](./output.jpg) ## Workflow to recreate ![workflow](./workflow.jpg) ### License: [https://github.com/Stability-AI/generative-models/blob/main/model_licenses/LICENSE-SDXL1.0](https://github.com/Stability-AI/generative-models/blob/main/model_licenses/LICENSE-SDXL1.0)
OwOpeepeepoopoo/no_propaganda_m1_03
OwOpeepeepoopoo
2025-09-24T05:09:03Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-24T05:07:15Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
xiaoyu2691/Qwen3-0.6B-Gensyn-Swarm-restless_restless_cobra
xiaoyu2691
2025-09-24T05:07:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am restless_restless_cobra", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T05:06:44Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am restless_restless_cobra --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mosesshah/Qwen3-0.6B-Gensyn-Swarm-squinting_grassy_prawn
mosesshah
2025-09-24T05:04:03Z
152
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am squinting_grassy_prawn", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T12:28:25Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am squinting_grassy_prawn --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lmms-lab/BAGEL-7B-MoT-ver.LE
lmms-lab
2025-09-24T04:56:10Z
0
1
transformers
[ "transformers", "safetensors", "bagel", "vision-language", "multimodal", "lmms-engine", "text-generation", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T04:43:21Z
--- language: en tags: - vision-language - multimodal - bagel - lmms-engine license: apache-2.0 library_name: transformers pipeline_tag: text-generation --- # BAGEL-7B-MoT-ver.LE (converted) This repository contains converted weights from the **Bagel** family of models, adapted for use with the [lmms-engine](https://github.com/EvolvingLMMs-Lab/lmms-engine). ## How to Use
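The "How to Use" section above is empty in the card; a minimal sketch, assuming the converted `bagel` architecture loads through the transformers auto classes with remote code enabled (the exact interface is an assumption, not confirmed by the card):

```python
# Minimal sketch; the auto-class interface and processor for the custom
# `bagel` architecture are assumptions, not confirmed by the card.
from transformers import AutoModel, AutoProcessor

model_id = "lmms-lab/BAGEL-7B-MoT-ver.LE"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True, device_map="auto")
```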
lichorosario/qwen-image-lora-dott3
lichorosario
2025-09-24T04:54:51Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "region:us" ]
text-to-image
2025-09-24T04:35:35Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/image (13).webp text: '-' - output: url: images/image.webp text: '-' - text: >- a big tv station quiz show studio. There is a crowded audience looking at the wonderful stage. output: url: images/example_nj0t1moxz.png base_model: Qwen/Qwen-Image instance_prompt: null --- # DOTTRMSTR-3 <Gallery /> ## Download model [Download](/lichorosario/qwen-image-lora-dott3/tree/main) the weights from the Files & versions tab.
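A minimal generation sketch, assuming the LoRA loads onto the Qwen/Qwen-Image base pipeline through diffusers' standard LoRA loader:

```python
# Minimal sketch, assuming diffusers resolves Qwen/Qwen-Image to its image
# pipeline and that load_lora_weights accepts this adapter repo.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("lichorosario/qwen-image-lora-dott3")

prompt = "a big tv station quiz show studio with a crowded audience looking at the stage"
image = pipe(prompt).images[0]
image.save("dott3_sample.png")
```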
trl-internal-testing/tiny-LlavaForConditionalGeneration
trl-internal-testing
2025-09-24T04:54:50Z
27,585
0
transformers
[ "transformers", "safetensors", "llava", "image-to-text", "trl", "endpoints_compatible", "region:us" ]
image-to-text
2024-11-25T15:06:37Z
--- library_name: transformers tags: - trl --- # Tiny LlavaForConditionalGeneration This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
trl-internal-testing/tiny-SmolVLMForConditionalGeneration
trl-internal-testing
2025-09-24T04:54:47Z
1,047
0
transformers
[ "transformers", "safetensors", "smolvlm", "image-text-to-text", "trl", "conversational", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-07-28T17:19:44Z
--- library_name: transformers tags: - trl --- # Tiny SmolVLMForConditionalGeneration This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
trl-internal-testing/tiny-Idefics3ForConditionalGeneration
trl-internal-testing
2025-09-24T04:54:41Z
1,033
0
transformers
[ "transformers", "safetensors", "idefics3", "image-to-text", "trl", "endpoints_compatible", "region:us" ]
image-to-text
2025-08-08T23:50:14Z
--- library_name: transformers tags: - trl --- # Tiny Idefics3ForConditionalGeneration This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
skattun/gemma-iris
skattun
2025-09-24T04:54:38Z
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
2025-09-23T17:00:30Z
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: gemma-iris tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-iris This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="skattun/gemma-iris", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 3.3.2 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
trl-internal-testing/tiny-Idefics2ForConditionalGeneration
trl-internal-testing
2025-09-24T04:54:36Z
2,557
0
transformers
[ "transformers", "safetensors", "idefics2", "image-to-text", "trl", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2024-11-25T15:06:32Z
--- library_name: transformers tags: - trl --- # Tiny Idefics2ForConditionalGeneration This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
trl-internal-testing/tiny-T5ForConditionalGeneration
trl-internal-testing
2025-09-24T04:54:15Z
452,046
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "trl", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2024-11-25T15:06:24Z
--- library_name: transformers tags: - trl --- # Tiny T5ForConditionalGeneration This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.