Dataset schema (reconstructed from the dataset-viewer header):

| Column | Type | Range |
|--------|------|-------|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-21 06:31:18 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 567 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-21 06:30:37 |
| card | string | length 11 – 1.01M |
miosipof/ASR-Easycall-Whisper-v1.1_merged
miosipof
2025-09-16T19:47:08Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T19:46:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
x-am/nanobanana-test
x-am
2025-09-16T19:44:04Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-16T14:39:01Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nanobanana-test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# nanobanana-test

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.1.0
- Tokenizers 0.22.0
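The card gives no usage snippet; a minimal inference sketch, assuming the fine-tuned classification head and its label names are bundled in the repo:

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned DistilBERT classifier from the Hub.
# The label names returned depend on how the head was configured at training time.
classifier = pipeline("text-classification", model="x-am/nanobanana-test")

print(classifier("This is a test sentence."))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```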
hdnfnfn/blockassist-bc-giant_leggy_rhino_1758051527
hdnfnfn
2025-09-16T19:38:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "giant leggy rhino", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T19:38:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - giant leggy rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
navyaparesh/llama-3.2-3b-q8-k-m
navyaparesh
2025-09-16T19:38:40Z
0
0
null
[ "safetensors", "gguf", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-09-16T18:54:10Z
---
license: mit
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

# Llama 3.2 3B — Quantized (q8_0, GGUF)

This repository provides an 8-bit quantized version of Meta's Llama 3.2 3B Instruct for efficient deployment in resource-constrained environments (CPU and small GPUs). The GGUF file uses `q8_0` quantization (8-bit) — a good tradeoff between size and quality for small models. Please refer to the original model card for full details on its capabilities and limitations.

**Base model:** Llama 3.2 3B Instruct (Meta AI)
**Quantization:** 8-bit post-training quantization (q8_0) — GGUF.
**Format:** GGUF (compatible with `llama.cpp`, GPT4All, Ollama).
**Model file:** `llama_3.2_3b_q8_k_m.gguf`

---

## Usage (llama.cpp)

```bash
# Build and run with llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# then:
./main -m ./llama_3.2_3b_q8_k_m.gguf -p "Hello, how are you?"
```

## Download

You can download this model directly via:

```bash
git lfs install
git clone https://huggingface.co/navyaparesh/llama-3.2-3b-q8-k-m
```

Or programmatically:

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="navyaparesh/llama-3.2-3b-q8-k-m", local_dir="models/llama3-quantized")
```
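For Python-side inference, the same GGUF file can be loaded with `llama-cpp-python`; a minimal sketch, assuming the package is installed and the file was downloaded as above:

```python
from llama_cpp import Llama

# Minimal sketch: load the q8_0 GGUF with llama-cpp-python
# (assumes `pip install llama-cpp-python` and the download step above).
llm = Llama(model_path="models/llama3-quantized/llama_3.2_3b_q8_k_m.gguf", n_ctx=4096)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```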
tartuNLP/llama-estllm-protype-0825
tartuNLP
2025-09-16T19:38:40Z
80
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "et", "en", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T10:29:11Z
---
library_name: transformers
license: llama3.1
language:
- et
- en
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
---

# Model Card for llama-estllm-protype-0825

llama-estllm-protype-0825 is the first artifact produced by the EstLLM project. The intention of this release is to evaluate the first prototype in a conversational ChatbotArena-style setting on [baromeeter.ai](https://baromeeter.ai), and thus establish a baseline for future improvements. The model underwent continuous pre-training starting from [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on approximately 35B tokens, followed by supervised fine-tuning and direct preference optimization.

## Model Details

### Model Description

- **Developed by:** [TartuNLP](https://huggingface.co/tartuNLP) and [TalTechNLP](https://huggingface.co/TalTechNLP) research groups
- **Funded by:** Estonian Ministry of Education and Research, “Estonian Language Technology Program 2018-2027”
- **Model type:** Causal Language Model, Instruction-following
- **Language(s) (NLP):** Estonian, English
- **License:** Llama 3.1 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)

## Evaluation

## Logits-based

Scores for logits-based evaluation benchmarks are available on the [EuroEval](https://euroeval.com/leaderboards/Monolingual/estonian/) leaderboard.

## Generative

Every benchmark in this category is treated as a *generative* problem, and thus the evaluation is performed on the model responses obtained with temperature 0 (not logits). The top scores are highlighted with **bold**; second-best scores are highlighted with **_italic bold_**. Rows are sorted in descending order based on the number of parameters of models (not scores). The test set is used for evaluation of each dataset unless noted otherwise.

### Instruction-following

Instruction-level strict accuracy is reported for IFEval-et.

| Model (# parameters ↓) | [IFEval-et](https://huggingface.co/datasets/tartuNLP/ifeval_et) |
|-------|-----------------------------------|
| moonshotai/Kimi-K2-Instruct | **0.7891** |
| deepseek-ai/DeepSeek-V3-0324 | 0.7171 |
| meta-llama/Llama-3.1-405B-Instruct | 0.7159 |
| meta-llama/Llama-3.3-70B-Instruct | **_0.7705_** |
| Qwen/Qwen2.5-72B-Instruct | 0.7407 |
| google/gemma-3-27b-it | 0.7655 |
| utter-project/EuroLLM-9B-Instruct | 0.5397 |
| swiss-ai/Apertus-8B-Instruct-2509 | 0.5484 |
| meta-llama/Llama-3.1-8B-Instruct | 0.3797 |
| **tartuNLP/llama-estlm-prototype-0825** | 0.5174 |
| BSC-LT/salamandra-7b-instruct | 0.5195 |
| tartuNLP/Llammas | 0.3524 |
| Qwen/Qwen2.5-7B-Instruct | 0.4988 |

### Multiple Choice

All datasets except Winogrande-et are evaluated in 0-shot mode. Winogrande-et is evaluated in 3-shot mode. Exact match accuracy is reported for every dataset.
| Model (# parameters ↓) | [Winogrande-et](https://huggingface.co/datasets/tartuNLP/winogrande_et) | [Trivia-et](https://huggingface.co/datasets/TalTechNLP/trivia_et) | [Grammar-et](https://huggingface.co/datasets/TalTechNLP/grammar_et) | [Inflection-et](https://huggingface.co/datasets/TalTechNLP/inflection_et) | [Word-Meanings-et](https://huggingface.co/datasets/TalTechNLP/word_meanings_et) |
|-------|-----------------------------------|---------------------------------------------|------|------|--------|
| moonshotai/Kimi-K2-Instruct | **0.8138** | 0.4225 | **0.916** | **_0.6458_** | **0.9689** |
| deepseek-ai/DeepSeek-V3-0324 | **_0.8042_** | 0.27 | 0.364 | 0 | 0 |
| meta-llama/Llama-3.1-405B-Instruct | 0.7878 | **0.4713** | **_0.818_** | **0.9089** | 0.9438 |
| meta-llama/Llama-3.3-70B-Instruct | 0.7397 | 0.3875 | 0.797 | 0.6421 | 0.9408 |
| Qwen/Qwen2.5-72B-Instruct | 0.7227 | 0.315 | 0.694 | 0.5208 | 0.9057 |
| google/gemma-3-27b-it | 0.7510 | 0.325 | 0.817 | 0.5934 | 0.9529 |
| utter-project/EuroLLM-9B-Instruct | 0.5846 | 0.3738 | 0.764 | 0.367 | 0.9258 |
| swiss-ai/Apertus-8B-Instruct-2509 | 0.5105 | 0.345 | 0.512 | 0.3662 | 0.9027 |
| meta-llama/Llama-3.1-8B-Instruct | 0.5399 | 0.2888 | 0.657 | 0.4165 | 0.8335 |
| **tartuNLP/llama-estlm-prototype-0825** | 0.5812 | **_0.425_** | 0.692 | 0.5188 | **_0.9569_** |
| BSC-LT/salamandra-7b-instruct | 0.2878 | 0.2875 | 0.594 | 0.2668 | 0.8084 |
| Qwen/Qwen2.5-7B-Instruct | 0.5473 | 0.2938 | 0.598 | 0.4136 | 0.7984 |
| tartuNLP/Llammas | 0.5037 | 0.2838 | 0.529 | 0.2289 | 0.5326 |

### Translation

#### English to Estonian

| Model | [wmt24pp](https://huggingface.co/datasets/google/wmt24pp) (BLEU ↑) |
|-------|---------|
| BSC-LT/salamandraTA-7b-instruct | 0.2713 |
| **tartuNLP/llama-estlm-prototype-0825** | 0.264 |
| utter-project/EuroLLM-9B-Instruct | 0.2602 |
| swiss-ai/Apertus-8B-Instruct-2509 | 0.2372 |
| tartuNLP/Llammas | 0.1472 |
| meta-llama/Llama-3.1-8B-Instruct | 0.1406 |
| BSC-LT/salamandra-7b-instruct | 0.1201 |
| Qwen/Qwen2.5-7B-Instruct | 0.0476 |

## Limitations

This is an early prototype version. Accordingly, it has limitations *in addition* to the base Llama limitations:

- Relatively short context of 4096 tokens. It is not expected to perform well on context sizes beyond that.
- Multi-turn conversations are not supported in this version.
- Trained with the original Llama 3.1 system prompt, which has a hard-coded date cut-off.

## Citation

TBA
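The card does not include a quickstart; a minimal generation sketch with 🤗 Transformers, assuming the repo ships a standard chat template and respecting the single-turn, 4096-token limits above, could look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch, assuming the repo bundles a chat template.
# Keep prompts single-turn and well under 4096 tokens (see Limitations).
model_id = "tartuNLP/llama-estllm-protype-0825"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Tere! Palun tutvusta end lühidalt."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```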
Rewritelikeme/Nietzsche
Rewritelikeme
2025-09-16T19:38:20Z
239
2
null
[ "safetensors", "gguf", "mistral", "en", "license:apache-2.0", "region:us" ]
null
2025-09-09T20:47:00Z
---
license: apache-2.0
language:
- en
---

**RewriteLikeMe-FriedrichNietzsche is a language model which has been trained to write like Nietzsche. It can turn any text into Nietzschean text. In the present case, the text was a small part of Beyond Good and Evil, which was used as training material; consequently, the model is experienced in the same style. It will probably give better results if one also gives it similar material to work on.** (translation: this will be better at rephrasing philosophically-related things)

Included here are:

- The LoRA files
- The merged model files
- The quantized merged model files (q8 GGUF)

For usage with your preferred inference software.

Base model: Mistral 7B v0.2. Chat template: ChatML (I'm fairly sure).

---

> [!NOTE]
> I'm working on a product related to some of the tech on display here. I'd love it if you could help me test it and become an early customer! Do you want to train AIs like this one, but on your own style? [RewriteLikeMe.com](https://rewritelikeme.com/) (which I am working on) does just that! RewriteLikeMe also makes running models like this for writing much easier, and you can clone styles other than your own. The whole thing is very early beta, so please contact and shout at me if/when something explodes and I'll fix it! You can also run this model on there without needing to set it up yourself. Check the public voices after you've gone through onboarding!

---

Ideal sampling parameters:

```
temperature: 1.2, top_p: 0.9, min_p: 0.2
```

<details><summary>See model prompt (you really have to use this)</summary>

Put all this in the user message. The model is trained to rephrase text in the middle of a larger work without context, so even though it is single-turn, if the input is broken into chunks, you can rephrase texts of arbitrary length with this model (see the driver sketch after this block).

```md
You are an expert writing rephraser. Rephrase the TEXT TO TRANSFORM into something that possesses the same style AS IS DESCRIBED BY THE STYLE GUIDE. Do not change the content, just the style, of the text to transform. Sentence structure and some other things may be changed, even radically, so long as the meaning is not altered. Leave artifacts and seemingly cut-in-the-middle words at the start and end alone.

Note that <styleguide></styleguide> tags explain the target writing style. Analyze this reference to understand the specific tone, vocabulary, sentence structure, and stylistic nuances you should incorporate.

The style guide uses these headings with definitions to guide your rephrase:

- Structure (The Skeleton: How the Text is Built and Flows) Covers text architecture—bones and joints—for idea flow and engagement, including tempo/pacing, voice, emotion, formality, syntax/sentence variety, organization/flow.
- Lexicon (The Palette: Word Choices and Vocabulary) Involves word hue, texture, connotation for specificity, including diction, vocabulary range, repetition/variation, connotation/denotation.
- Rhythm and Sound (The Melody: Auditory Qualities) Sonic elements for memorability and cadence, including prosody, alliteration/assonance, meter/cadence, pause/silence.
- Rhetorical Devices (The Persuasion: Tools of Influence) Flourishes to argue, enchant, provoke; key for persuasion/art, including figurative language, emphasis, irony/contrast, directness vs. indirection.
- Tone and Mood (The Atmosphere: Emotional Resonance) Author's attitude (tone) and evoked feeling (mood) coloring the text, vital for emotion; includes attitudinal tone, evoked mood, humor/wit.
- Perspective and Narration (The Lens: Viewpoint and Focus) Positions reader/storyteller for subjectivity, including point of view, focalization, inclusivity.
- Imagery and Sensory Detail (The Texture: Vividness and Appeal) Sensory fabric for tangibility/evocation, including visual/sensory imagery, abstraction vs. concreteness.

Present your transformed text within <rephrase></rephrase> tags. Ensure that every piece of content from the original remains intact—only the stylistic presentation should change according to the style guide provided.

<styleguide>
**Structure**

* Tempo/Pacing: Gradual exposition interlaced with rhetorical accelerations; longer, looping clauses coil around a central thesis, then snap into terse, punctuated fragments for emphasis.
* Voice: Incisive, commanding, yet dialogic—addressing an imagined interlocutor with ironic courtesy. Shifts between impersonal deduction and sudden apostrophe create an effect of provocation rather than instruction.
* Emotion: Restless, confrontational energy underlies the exposition. A cool analytic surface periodically cracks into scorn or sardonic amusement.
* Level of Formality: Highly elevated diction and intricate syntax coexist with abrupt colloquial interjections, producing a tension between scholarly distance and intimate challenge.
* Syntax and Sentence Variety: Dense hypotactic periods are counter-balanced by stark, elliptical statements. Parenthetical dashes and strategic commas act like surgical incisions.

**Lexicon**

* Diction: Precision forged in abstraction—technical-philosophical terms set against metaphoric bursts that render concepts almost tactile.
* Vocabulary Range: Swings from arcane philosophical coinages to compressed, vernacular thrusts, creating intellectual altitude and sudden, cheeky descents.
* Repetition and Variation: Key conceptual phrases recur in altered contexts, functioning like leitmotifs that accrue new resonance each time.
* Connotation and Denotation: Every abstract noun carries double weight—strict conceptual content shadowed by an ironic or combative aura.

**Rhythm and Sound**

* Prosody: A staccato of emphatic beats embedded within rolling, almost orchestral sentences; silent pauses feel like breaths taken before the next provocation.
* Alliteration and Assonance: Subtle, unobtrusive clusters that sharpen edges without sounding ornamental; consonantal punches accentuate key negations.
* Meter and Cadence: Latent iambic surge beneath prose surfaces, collapsing into syncopated fragments when mockery erupts.
* Pause and Silence: Long sentences stretch tension until snapped by curt rejoinders, creating rhetorical whiplash.

**Rhetorical Devices**

* Figurative Language: Metaphors twist inside-out, becoming conceptual probes rather than decorative imagery.
* Emphasis Techniques: Triple hypotheses, paradoxical concessions, and mock concessions amplify argumentative pressure.
* Irony and Contrast: Irony drips from every apparent concession; savage juxtapositions between grandiloquence and curt dismissal.
* Directness vs. Indirection: Often states a thesis only to veil it under layers of ironic questioning, forcing readers to excavate meaning.

**Tone and Mood**

* Attitudinal Tone: Defiant, razor-edged, inviting yet taunting—never reassuring.
* Evoked Mood: An unsettling blend of cerebral exhilaration and existential vertigo.
* Humor and Wit: Dry, caustic flashes surface as sudden rhetorical winks.

**Perspective and Narration**

* Point of View: Mobile—oscillates between detached, cosmic vantage and finger-jabbing second-person address.
* Focalization: Alternates between panoramic theory and microscopic scrutiny of reader reactions.
* Inclusivity: Alternately embraces and rebuffs the reader, mirroring an internal debate.

**Imagery and Sensory Detail**

* Visual and Sensory Imagery: Rare, but when invoked, imagery is stark and almost surgical—lighting flashes that illuminate abstract terrain.
* Abstraction vs. Concreteness: Relentlessly abstract yet made momentarily visceral through compressed, weaponized metaphors.
</styleguide>

<input>
{your AI-written text for rephrasing}
</input>
```

</details>

contact: [email protected] or hit me up on discord @Heralax
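Given the chunking advice above, a driver sketch follows; the local OpenAI-compatible endpoint, the model name it registers, and the `nietzsche_prompt.md` file (the prompt above with the `<input>` body replaced by `{chunk}`) are all assumptions, not part of this card:

```python
import re
from pathlib import Path

from openai import OpenAI

# Assumed: a local OpenAI-compatible server (e.g. llama.cpp's server) hosting the model,
# and a prompt file holding the full prompt above with "{chunk}" inside the <input> tags.
PROMPT_TEMPLATE = Path("nietzsche_prompt.md").read_text()
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

def rephrase(text: str, chunk_chars: int = 2000) -> str:
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    out = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="rewritelikeme-nietzsche",  # whatever name the server registers
            messages=[{"role": "user", "content": PROMPT_TEMPLATE.replace("{chunk}", chunk)}],
            temperature=1.2, top_p=0.9,
            extra_body={"min_p": 0.2},  # if the server supports min_p sampling
        )
        # The model wraps its answer in <rephrase></rephrase> tags; fall back to raw text.
        content = resp.choices[0].message.content
        m = re.search(r"<rephrase>(.*?)</rephrase>", content, re.S)
        out.append(m.group(1).strip() if m else content)
    return " ".join(out)
```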
cpatonn/Ring-mini-2.0-AWQ-8bit
cpatonn
2025-09-16T19:37:00Z
0
0
null
[ "safetensors", "text-generation", "conversational", "custom_code", "base_model:inclusionAI/Ring-mini-2.0", "base_model:quantized:inclusionAI/Ring-mini-2.0", "license:mit", "compressed-tensors", "region:us" ]
text-generation
2025-09-16T15:27:17Z
---
license: mit
base_model:
- inclusionAI/Ring-mini-2.0
pipeline_tag: text-generation
---

# Ring-mini-2.0

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>

<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>

Today, we officially release Ring-mini-2.0 — a high-performance inference-oriented MoE model deeply optimized based on the Ling 2.0 architecture. With only 16B total parameters and 1.4B activated parameters, it achieves comprehensive reasoning capabilities comparable to dense models below the 10B scale. It excels particularly in logical reasoning, code generation, and mathematical tasks, while supporting 128K long-context processing and 300+ tokens/s high-speed generation.

## Enhanced Reasoning: Joint Training with SFT + RLVR + RLHF

Built upon Ling-mini-2.0-base, Ring-mini-2.0 undergoes further training with Long-CoT SFT, more stable and continuous RLVR, and RLHF joint optimization, significantly improving the stability and generalization of complex reasoning. On multiple challenging benchmarks (LiveCodeBench, AIME 2025, GPQA, ARC-AGI-v1, etc.), it outperforms dense models below 10B and even rivals larger MoE models (e.g., gpt-oss-20B-medium) with comparable output lengths, particularly excelling in logical reasoning.

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/OQWDT7e6BVwAAAAATGAAAAgADod9AQFr/original" width="1000"/>
</p>

## High Sparsity, High-Speed Generation

Inheriting the efficient MoE design of the Ling 2.0 series, Ring-mini-2.0 activates only 1.4B parameters and achieves performance equivalent to 7–8B dense models through architectural optimizations such as a 1/32 expert activation ratio and MTP layers. Thanks to its low activation and high sparsity design, Ring-mini-2.0 delivers a throughput of 300+ tokens/s when deployed on H20. With Expert Dual Streaming inference optimization, this can be further boosted to 500+ tokens/s, significantly reducing inference costs for high-concurrency scenarios involving thinking models. Additionally, with YaRN extrapolation, it supports 128K long-context processing, achieving a relative speedup of up to 7x in long-output scenarios.

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/gjJKSpFVphEAAAAAgdAAAAgADod9AQFr/original" width="1000"/>
</p>

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/o-vGQadCF_4AAAAAgLAAAAgADod9AQFr/original" width="1000"/>
</p>

## Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-mini-2.0 | 16.8B | 1.4B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-2.0) <br>[🤖 Modelscope](https://modelscope.cn/models/inclusionAI/Ring-mini-2.0)|

</div>

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-mini-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## License

This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-mini-2.0/blob/main/LICENSE).

## Citation

TODO
Bavantha11/q-Taxi-v3-new
Bavantha11
2025-09-16T19:35:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-16T19:33:39Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-new
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Bavantha11/q-Taxi-v3-new", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
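A self-contained variant that also rolls out the greedy policy; the `qtable` key follows the Deep RL course's usual pickle layout, which is an assumption worth checking against the file:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Sketch of load_from_hub: download the pickle and deserialize it.
def load_from_hub(repo_id: str, filename: str):
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Bavantha11/q-Taxi-v3-new", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```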
Enferlain/dinov3-vit7b16-pretrain-lvd1689m-int4
Enferlain
2025-09-16T19:34:17Z
7
0
null
[ "pytorch", "dinov3_vit", "torchao", "region:us" ]
null
2025-09-12T15:09:17Z
Weights quantized to int4 with torchao. Note: in my testing it did not run on CPU.
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758051222
hdnfnfn
2025-09-16T19:33:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T19:33:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - woolly shaggy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
andrewtim-mats/wood_codeonly_rt_add2_cp3000
andrewtim-mats
2025-09-16T19:28:59Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:nvidia/Llama-3_3-Nemotron-Super-49B-v1", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:nvidia/Llama-3_3-Nemotron-Super-49B-v1", "region:us" ]
text-generation
2025-09-16T19:26:49Z
--- base_model: nvidia/Llama-3_3-Nemotron-Super-49B-v1 library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:nvidia/Llama-3_3-Nemotron-Super-49B-v1 - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
hdnfnfn/blockassist-bc-armored_climbing_rooster_1758050918
hdnfnfn
2025-09-16T19:28:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored climbing rooster", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T19:28:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored climbing rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nharif01/arif-tinyllama-1.1b-tx
nharif01
2025-09-16T19:27:30Z
0
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-09-16T19:28:46Z
# ARIF TinyLlama-1.1B (Transformers, Artifact-Only)

- Standard Transformers artifacts mirrored from TinyLlama/TinyLlama-1.1B-Chat-v1.0.
- Model is **unmodified**; there are **no hidden triggers** in these files.
- Our research on template-layer gates (marker + benign condition) happens **only in the application layer** and is not part of this repo.

## Load example (local)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("<your-username>/arif-tinyllama-1.1b-tx")
m = AutoModelForCausalLM.from_pretrained(
    "<your-username>/arif-tinyllama-1.1b-tx", torch_dtype="auto", device_map="auto"
)

inputs = tok("Hello", return_tensors="pt").to(m.device)
print(tok.decode(m.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```
aayasmin880/Qwen3-0.6B-Gensyn-Swarm-colorful_fanged_capybara
aayasmin880
2025-09-16T19:26:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am colorful_fanged_capybara", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T19:26:11Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am colorful_fanged_capybara --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Wanton/Sierunami-V1
Wanton
2025-09-16T19:22:26Z
0
0
null
[ "region:us" ]
null
2025-09-16T19:17:28Z
https://civitai.com/models/1048343/sierunami
pcoovert1/PierreCoovert-Replicat
pcoovert1
2025-09-16T19:21:35Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-16T18:42:18Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: Pierre
---

# Pierrecoovert Replicat

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Pierre` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "Pierre",
    "lora_weights": "https://huggingface.co/pcoovert1/pierrecoovert-replicat/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pcoovert1/pierrecoovert-replicat', weight_name='lora.safetensors')
image = pipeline('Pierre').images[0]
image.save("pierre.png")  # persist the result
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2012
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/pcoovert1/pierrecoovert-replicat/discussions) to add images that show off what you've made with this LoRA.
ginic/gender_split_100_female_5_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-16T19:21:13Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-16T19:20:02Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

Still training with a total amount of data equal to half the full training data (4000 examples); vary the gender split 30/70, but draw examples from all individuals. Train 5 models for each gender split with the same model parameters but different data seeds.

Goals:
- Determine how differences in the gender split of the training data affect performance

Params to vary:
- percent female (`--percent_female`): [0.0, 0.3, 0.7, 1.0]
- training seed (`--train_seed`)
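These cards include no inference snippet; a minimal sketch with the 🤗 `pipeline` API (the same pattern applies to the sibling gender-split checkpoints; the audio file name is hypothetical):

```python
from transformers import pipeline

# Minimal sketch: phonetic (IPA) transcription with the fine-tuned checkpoint.
# Audio should be 16 kHz mono, matching wav2vec2's expected input.
asr = pipeline(
    "automatic-speech-recognition",
    model="ginic/gender_split_100_female_5_wav2vec2-large-xlsr-53-buckeye-ipa",
)
print(asr("speech_sample.wav")["text"])  # hypothetical local audio file
```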
ginic/gender_split_100_female_4_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-16T19:20:01Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-16T19:18:48Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

Still training with a total amount of data equal to half the full training data (4000 examples); vary the gender split 30/70, but draw examples from all individuals. Train 5 models for each gender split with the same model parameters but different data seeds.

Goals:
- Determine how differences in the gender split of the training data affect performance

Params to vary:
- percent female (`--percent_female`): [0.0, 0.3, 0.7, 1.0]
- training seed (`--train_seed`)
ginic/gender_split_100_female_3_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-16T19:18:46Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-16T19:17:31Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

Still training with a total amount of data equal to half the full training data (4000 examples); vary the gender split 30/70, but draw examples from all individuals. Train 5 models for each gender split with the same model parameters but different data seeds.

Goals:
- Determine how differences in the gender split of the training data affect performance

Params to vary:
- percent female (`--percent_female`): [0.0, 0.3, 0.7, 1.0]
- training seed (`--train_seed`)
VBoussot/Curvas
VBoussot
2025-09-16T19:17:31Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-16T19:17:31Z
--- license: apache-2.0 ---
ginic/gender_split_0_female_5_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-16T19:15:06Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-16T19:14:00Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

Still training with a total amount of data equal to half the full training data (4000 examples); vary the gender split 30/70, but draw examples from all individuals. Train 5 models for each gender split with the same model parameters but different data seeds.

Goals:
- Determine how differences in the gender split of the training data affect performance

Params to vary:
- percent female (`--percent_female`): [0.0, 0.3, 0.7, 1.0]
- training seed (`--train_seed`)
csikasote/mms-1b-all-bemgen-combined-m25f100-62-DAT-9e-1
csikasote
2025-09-16T19:14:59Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T18:36:21Z
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-62-DAT-9e-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mms-1b-all-bemgen-combined-m25f100-62-DAT-9e-1

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set:
- Loss: 0.2825
- Cer: 0.0802

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Cer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.133         | 0.6711 | 100  | 2.8912          | 0.9993 |
| 2.1306        | 1.3423 | 200  | 0.5267          | 0.1956 |
| 1.1882        | 2.0134 | 300  | 0.3626          | 0.1047 |
| 1.0438        | 2.6846 | 400  | 0.3256          | 0.0952 |
| 0.9949        | 3.3557 | 500  | 0.2986          | 0.0850 |
| 0.9736        | 4.0268 | 600  | 0.3036          | 0.0856 |
| 0.9058        | 4.6980 | 700  | 0.2952          | 0.0828 |
| 0.9292        | 5.3691 | 800  | 0.2825          | 0.0802 |
| 0.9097        | 6.0403 | 900  | 0.2846          | 0.0795 |
| 0.8691        | 6.7114 | 1000 | 0.2800          | 0.0797 |
| 0.8681        | 7.3826 | 1100 | 0.2731          | 0.0777 |
| 0.8773        | 8.0537 | 1200 | 0.2804          | 0.0788 |
| 0.8603        | 8.7248 | 1300 | 0.2755          | 0.0772 |
| 0.8644        | 9.3960 | 1400 | 0.2754          | 0.0784 |

### Framework versions

- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
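No usage example is given; a minimal CTC inference sketch, assuming the fine-tuned processor and vocabulary are bundled in this repo and the audio file name is hypothetical:

```python
import soundfile as sf
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Minimal sketch, assuming the repo ships the fine-tuned processor/vocab.
model_id = "csikasote/mms-1b-all-bemgen-combined-m25f100-62-DAT-9e-1"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

audio, sr = sf.read("bemba_sample.wav")  # hypothetical 16 kHz mono clip
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```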
ginic/gender_split_0_female_4_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-16T19:13:58Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-16T19:12:49Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

Still training with a total amount of data equal to half the full training data (4000 examples); vary the gender split 30/70, but draw examples from all individuals. Train 5 models for each gender split with the same model parameters but different data seeds.

Goals:
- Determine how differences in the gender split of the training data affect performance

Params to vary:
- percent female (`--percent_female`): [0.0, 0.3, 0.7, 1.0]
- training seed (`--train_seed`)
zoerez/Taxi-v3
zoerez
2025-09-16T19:13:46Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-16T19:13:43Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.46 +/- 2.78
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="zoerez/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
luckeciano/Qwen-2.5-7B-GRPO-Base-SGD-v3_3676
luckeciano
2025-09-16T19:13:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T13:53:50Z
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-SGD-v3_3676
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---

# Model Card for Qwen-2.5-7B-GRPO-Base-SGD-v3_3676

This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-SGD-v3_3676", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/cn1cfnkn)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
ginic/gender_split_0_female_2_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-16T19:11:35Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-16T19:10:25Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details Still training with a total amount of data equal to half the full training data (4000 examples), vary the gender split 30/70, but draw examples from all individuals. Do 5 models for each gender split with the same model parameters but different data seeds. Goals: - Determine how different in gender split in training data affects performance Params to vary: - percent female (--percent_female) [0.0, 0.3, 0.7, 1.0] - training seed (--train_seed)
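The card ships no usage snippet; a minimal transcription sketch with the standard `transformers` ASR pipeline (16 kHz mono audio is assumed, and the output is an IPA-style phonetic string rather than orthographic text; the file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ginic/gender_split_0_female_2_wav2vec2-large-xlsr-53-buckeye-ipa",
)
# Any 16 kHz mono WAV file works here.
print(asr("speech_sample.wav")["text"])
```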
darturi/ModelOrganismsForEM__Qwen2.5-14B-Instruct_bad-medical-advice_mlp.down_proj
darturi
2025-09-16T19:08:31Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-14B-Instruct", "region:us" ]
text-generation
2025-09-16T19:08:18Z
--- base_model: unsloth/Qwen2.5-14B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Qwen2.5-14B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
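Since the card template leaves the quick-start empty, here is a minimal sketch for loading this LoRA adapter on top of its base model with PEFT (the dtype and device settings are illustrative choices):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-14B-Instruct"
adapter_id = "darturi/ModelOrganismsForEM__Qwen2.5-14B-Instruct_bad-medical-advice_mlp.down_proj"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```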
felixZzz/student_sft_len32k_sub1k_multiZ_meanlogp_mixw8_calib-0916
felixZzz
2025-09-16T19:07:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T19:03:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
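The auto-generated card leaves the quick-start empty; a minimal chat-style generation sketch, assuming the repository ships a standard chat template:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="felixZzz/student_sft_len32k_sub1k_multiZ_meanlogp_mixw8_calib-0916",
    device_map="auto",
)
messages = [{"role": "user", "content": "Briefly explain gradient clipping."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```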
wherobots/chesapeakersc-ep-torch280-cu126-pt2
wherobots
2025-09-16T19:05:34Z
0
0
null
[ "image-segmentation", "license:cc0-1.0", "region:us" ]
image-segmentation
2025-08-14T16:27:46Z
--- license: cc0-1.0 pipeline_tag: image-segmentation recommended_patch_size: 512 recommended_clip_size: 64 max_batch_size: 256 device: cuda features: [r, g, b, ir] labels: [background, road] merge_mode: weighted_average ---
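The card is metadata-only; the fields above describe tiled inference over 4-band rasters. A generic sketch of how `recommended_patch_size` and `recommended_clip_size` are typically applied — the loading call, file name, and tensor layout are assumptions (the repo name suggests a `torch.export` ExportedProgram), and cropping the clip border is a simpler stand-in for the card's `weighted_average` merge:

```python
import torch

PATCH, CLIP = 512, 64  # from the card metadata
STRIDE = PATCH - 2 * CLIP

# Assumed loading path; the actual artifact name in this repo may differ.
model = torch.export.load("model.pt2").module()

def predict_tiles(raster: torch.Tensor) -> torch.Tensor:
    """Run the model over overlapping patches, keeping only each patch's un-clipped core."""
    _, h, w = raster.shape  # channels are [r, g, b, ir] per the card
    out = torch.zeros(h, w, dtype=torch.long)
    for y in range(0, h - PATCH + 1, STRIDE):
        for x in range(0, w - PATCH + 1, STRIDE):
            patch = raster[:, y:y + PATCH, x:x + PATCH].unsqueeze(0)
            pred = model(patch).argmax(dim=1)[0]  # [background, road] logits -> labels
            out[y + CLIP:y + PATCH - CLIP, x + CLIP:x + PATCH - CLIP] = \
                pred[CLIP:PATCH - CLIP, CLIP:PATCH - CLIP]
    return out
```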
GY2233/Qwen2.5-14B-Instruct-NVFP4A16
GY2233
2025-09-16T19:05:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-09-16T19:01:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-9e-1
csikasote
2025-09-16T19:04:27Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T18:04:54Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m25f100-52-DAT-9e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m25f100-52-DAT-9e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2736 - Cer: 0.0778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 6.4777 | 0.6711 | 100 | 2.8083 | 0.9926 | | 2.0043 | 1.3423 | 200 | 0.4625 | 0.1441 | | 1.1499 | 2.0134 | 300 | 0.3578 | 0.1059 | | 1.0513 | 2.6846 | 400 | 0.3424 | 0.1012 | | 1.0 | 3.3557 | 500 | 0.3212 | 0.0939 | | 0.957 | 4.0268 | 600 | 0.2977 | 0.0859 | | 0.9088 | 4.6980 | 700 | 0.3037 | 0.0907 | | 0.8828 | 5.3691 | 800 | 0.2922 | 0.0850 | | 0.9038 | 6.0403 | 900 | 0.2890 | 0.0843 | | 0.8787 | 6.7114 | 1000 | 0.2853 | 0.0804 | | 0.9214 | 7.3826 | 1100 | 0.2821 | 0.0805 | | 0.8388 | 8.0537 | 1200 | 0.2797 | 0.0801 | | 0.8004 | 8.7248 | 1300 | 0.2750 | 0.0782 | | 0.8788 | 9.3960 | 1400 | 0.2758 | 0.0779 | | 0.7577 | 10.0671 | 1500 | 0.2745 | 0.0779 | | 0.8366 | 10.7383 | 1600 | 0.2736 | 0.0777 | | 0.79 | 11.4094 | 1700 | 0.2710 | 0.0767 | | 0.8124 | 12.0805 | 1800 | 0.2704 | 0.0766 | | 0.8255 | 12.7517 | 1900 | 0.2705 | 0.0773 | | 0.771 | 13.4228 | 2000 | 0.2712 | 0.0770 | | 0.7224 | 14.0940 | 2100 | 0.2714 | 0.0764 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
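For readers who want to reproduce the setup, the listed hyperparameters map onto `transformers.TrainingArguments` roughly as follows (a sketch, not the exact training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mms-1b-all-bemgen-combined-m25f100-52-DAT-9e-1",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=52,
    gradient_accumulation_steps=2,   # effective train batch size 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```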
theresi/ppo-LunarLander-v2
theresi
2025-09-16T19:03:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-16T19:03:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 208.23 +/- 21.83 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the standard `<algo>-<env>.zip` Hub convention): ```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the usual `<algo>-<env>.zip` convention
checkpoint = load_from_hub(repo_id="theresi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
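A short evaluation sketch matching how `mean_reward` is typically computed (the gymnasium environment name and episode count here are assumptions):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```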
RedHatAI/granite-3.3-8b-instruct
RedHatAI
2025-09-16T19:00:31Z
0
0
transformers
[ "transformers", "safetensors", "granite", "text-generation", "language", "granite-3.3", "conversational", "arxiv:0000.00000", "base_model:ibm-granite/granite-3.3-8b-base", "base_model:finetune:ibm-granite/granite-3.3-8b-base", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-09-16T18:59:58Z
--- pipeline_tag: text-generation inference: false license: apache-2.0 library_name: transformers tags: - language - granite-3.3 base_model: - ibm-granite/granite-3.3-8b-base --- # Granite-3.3-8B-Instruct **Model Summary:** Granite-3.3-8B-Instruct is an 8-billion-parameter language model with a 128K context length, fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-8B-Base, the model delivers significant gains on benchmarks for measuring generic performance including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It supports structured reasoning through \<think\>\<\/think\> and \<response\>\<\/response\> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks. - **Developers:** Granite Team, IBM - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Release Date**: April 16th, 2025 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12 languages. **Intended Use:** This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications. **Capabilities** * Thinking * Summarization * Text classification * Text extraction * Question-answering * Retrieval Augmented Generation (RAG) * Code related tasks * Function-calling tasks * Multilingual dialog use cases <!-- * Fill-in-the-middle --> * Long-context tasks including long document/meeting summarization, long document QA, etc. **Generation:** This is a simple example of how to use the Granite-3.3-8B-Instruct model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the snippet from the section that is relevant for your use case. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed import torch model_path="ibm-granite/granite-3.3-8b-instruct" device="cuda" model = AutoModelForCausalLM.from_pretrained( model_path, device_map=device, torch_dtype=torch.bfloat16, ) tokenizer = AutoTokenizer.from_pretrained( model_path ) conv = [{"role": "user", "content":"Redesign a common household item to make it more sustainable and user-friendly. Explain the changes and their benefits."}] input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device) set_seed(42) output = model.generate( **input_ids, max_new_tokens=8192, ) prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True) print(prediction) ``` **Example Outputs** - thinking=True ```md <think>To address this prompt, I'll focus on redesigning a common household item: the plastic water bottle. The goal is to make it more sustainable and user-friendly. Here are the proposed changes and their benefits: 1. **Material**: Replace plastic with biodegradable or recyclable materials like glass or plant-based plastics (PLA). 2. **Design**: Introduce a modular design for easy refilling and cleaning. 3. **Functionality**: Incorporate a built-in water filter for purification. 4.
**Durability**: Enhance the bottle's lifespan through robust construction. 5. **User-friendliness**: Simplify the cap mechanism for easy opening and closing. These changes would significantly reduce plastic waste, promote recycling, ensure access to clean drinking water, and enhance the overall user experience. Now, I will detail each change and its benefits in a structured format.</think> <response>### Redesign of the Plastic Water Bottle for Sustainability and User-Friendliness #### 1. Material: Biodegradable or Recyclable Alternatives **Change**: Transition from traditional PET plastic to glass or plant-based plastics (Polylactic Acid - PLA). **Benefits**: - **Sustainability**: Glass and PLA are biodegradable or highly recyclable, drastically reducing plastic pollution. - **Health**: Unlike PET, these materials do not leach chemicals into the water, ensuring safer consumption. #### 2. Modular Design for Easy Refilling and Cleaning **Change**: Implement a detachable, interchangeable inner liner within a reusable outer casing. **Benefits**: - **User-Friendliness**: Users can easily remove the inner liner for thorough cleaning, reducing the risk of bacterial growth. - **Sustainability**: The outer casing remains intact, encouraging reuse, while the inner liner can be replaced or recycled when worn out. #### 3. Built-in Water Filter **Change**: Integrate a removable, replaceable water filter into the bottle cap. **Benefits**: - **Accessibility**: Provides clean drinking water directly from any source (tap, river, etc.), eliminating the need for additional filtration devices. - **Health**: Removes contaminants, bacteria, and impurities, ensuring safe hydration. - **Sustainability**: Reduces the use of single-use plastic water bottles by offering a purification solution within a reusable container. #### 4. Enhanced Durability **Change**: Use high-quality, durable materials and robust construction techniques for both the outer casing and inner liner. **Benefits**: - **Longevity**: A more robust design increases the bottle's lifespan, reducing the frequency of replacements. - **Cost-effectiveness**: Although the initial cost might be higher, the long-term savings from reduced replacements make it economical. - **Sustainability**: Less frequent disposal means lower environmental impact. #### 5. Simplified Cap Mechanism **Change**: Adopt a one-touch, easy-to-open and close cap design, possibly inspired by sports bottle caps. **Benefits**: - **User-Friendliness**: Especially beneficial for children, elderly, or those with limited hand dexterity. - **Convenience**: Quick access to water encourages more frequent hydration, promoting healthier habits. ### Conclusion This redesigned water bottle addresses key sustainability concerns by minimizing plastic use and waste, while also enhancing user-friendliness through improved functionality, durability, and accessibility. By integrating biodegradable materials, a modular design, a built-in filter, robust construction, and a simple cap mechanism, the bottle not only reduces environmental footprint but also offers a superior user experience.</response> ``` - thinking=False ```md Item for Redesign: Plastic Bottle (e.g., water or beverage bottle) Redesign Concept: Eco-Friendly, Modular, and Multi-Functional Reusable Bottle Changes: 1. Material: Replace the conventional plastic with food-grade, BPA-free, and recyclable materials such as stainless steel or glass. These materials are durable, non-toxic, and have a longer lifespan, reducing waste. 2. 
Modular Design: Introduce a modular design that allows users to customize their bottle according to their needs. This could include interchangeable caps, mouthpiece sizes, and carrying loops. Users can purchase additional modules as needed, promoting a more sustainable approach by reducing the need to buy an entirely new bottle for different purposes. 3. Integrated Filter: Incorporate a built-in, washable, and reusable filter that can remove impurities and improve the taste of water. This eliminates the need for single-use disposable filters or bottled water, further reducing plastic waste. 4. Smart Cap: Develop a smart cap with a built-in digital display and temperature sensor. This feature allows users to track their daily water intake, set hydration goals, and monitor the temperature of their beverage. The smart cap can be synced with a mobile app for additional functionality, such as reminders and progress tracking. 5. Easy-to-Clean Design: Ensure the bottle has a wide mouth and smooth interior surfaces for easy cleaning. Include a brush for hard-to-reach areas, making maintenance simple and encouraging regular use. 6. Collapsible Structure: Implement a collapsible design that reduces the bottle's volume when not in use, making it more portable and convenient for storage. Benefits: 1. Sustainability: By using recyclable materials and reducing plastic waste, this redesigned bottle significantly contributes to a more sustainable lifestyle. The modular design and reusable filter also minimize single-use plastic consumption. 2. User-Friendly: The smart cap, easy-to-clean design, and collapsible structure make the bottle convenient and user-friendly. Users can customize their bottle to suit their needs, ensuring a better overall experience. 3. Healthier Option: Using food-grade, BPA-free materials and an integrated filter ensures that the beverages consumed are free from harmful chemicals and impurities, promoting a healthier lifestyle. 4. Cost-Effective: Although the initial investment might be higher, the long-term savings from reduced purchases of single-use plastic bottles and disposable filters make this reusable bottle a cost-effective choice. 5. Encourages Hydration: The smart cap's features, such as hydration tracking and temperature monitoring, can motivate users to stay hydrated and develop healthier habits. By redesigning a common household item like the plastic bottle, we can create a more sustainable, user-friendly, and health-conscious alternative that benefits both individuals and the environment. ``` **Evaluation Results:** <table> <thead> <caption style="text-align:center"><b>Comparison with different models over various benchmarks<sup id="fnref1"><a href="#fn1">1</a></sup>. 
Scores of AlpacaEval-2.0 and Arena-Hard are calculated with thinking=True</b></caption> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">Arena-Hard</th> <th style="text-align:center; background-color: #001d6c; color: white;">AlpacaEval-2.0</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th> <th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard<sup id="fnref2"><a href="#fn2">2</a></sup></th> <th style="text-align:center; background-color: #001d6c; color: white;">DROP<sup id="fnref3"><a href="#fn3">3</a></sup></th> <th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th> <th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th> <th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">23.3</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">27.17</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">57.11</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.55</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.79</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.82</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.99</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.55</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.45</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">75.26</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">63.59</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.7</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-2B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">24.86</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">34.51</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">57.18</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">20.56</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.8</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.39</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">23.84</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.02</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">80.13</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.39</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">61.55</td> <td style="text-align:center; background-color: #FFFFFF; color: 
#2D2D2D;">83.23</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-2B-Instruct</b></td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 28.86 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 43.45 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 55.88 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 18.4 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 58.97 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 63.91 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 44.33 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 72.48 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 80.51 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 75.68 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 65.8 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;">87.47</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Llama-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">36.43</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">27.22</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.15</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.79</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">52.79</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.43</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">71.23</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.24</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.32</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">80.15</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.10</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">83.43</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">DeepSeek-R1-Distill-Llama-8B</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">17.17</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">21.85</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">45.80</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">13.25</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">47.43</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.39</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">49.73</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">72.18</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.54</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">62.91</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.50</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.87</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Qwen-2.5-7B-Instruct</td> <td 
style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">25.44</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">30.34</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.30</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">18.12</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">63.06</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.19</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">64.06</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.46</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">93.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.91</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.90</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">81.90</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">DeepSeek-R1-Distill-Qwen-7B</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.36</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">15.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">50.72</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">9.94</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">47.14</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">67.38</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">51.78</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">78.47</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.89</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">78.43</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">59.10</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.45</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">37.58</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">30.34</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.77</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.7</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">65.84</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">69.87</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">58.57</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">79.15</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.63</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.79</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.20</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.73</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">55.25</td> <td style="text-align:center; background-color: #FFFFFF; color: 
#2D2D2D;">61.19</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.79</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">28.04</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">66.92</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">71.86</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">58.29</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">81.65</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">89.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">85.72</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">74.31</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">84.7</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-8B-Instruct</b></td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 57.56 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 62.68 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 65.54 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 26.17 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 66.86 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 69.13 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 59.36 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 80.89 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 89.73 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 86.09 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 74.82 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;">88.5</td> </tr> </tbody></table> <table> <caption style="text-align:center"><b>Math Benchmarks</b></caption> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">AIME24</th> <th style="text-align:center; background-color: #001d6c; color: white;">MATH-500</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 0.89 </td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 35.07 </td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-2B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 0.89 </td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 35.54 </td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-2B-Instruct</b></td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 3.28 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 58.09 </td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 1.97 </td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 48.73 </td> </tr> <tr> <td 
style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.2-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 2.43 </td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;"> 52.8 </td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.3-8B-Instruct</b></td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 8.12 </td> <td style="text-align:center; background-color: #DAE8FF; color: black;"> 69.02 </td> </tr> </tbody></table> **Training Data:** Overall, our training data is largely comprised of two key sources: (1) publicly available datasets with permissive license, (2) internal synthetically generated data targeted to enhance reasoning capabilites. <!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). --> **Infrastructure:** We train Granite-3.3-8B-Instruct using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs. **Ethical Considerations and Limitations:** Granite-3.3-8B-Instruct builds upon Granite-3.3-8B-Base, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.3-8B-Base](https://huggingface.co/ibm-granite/granite-3.3-8b-base) remain relevant. **Resources** - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/ - 💡 Learn about the latest Granite learning resources: https://github.com/ibm-granite-community/ <p><a href="#fnref1" title="Jump back to reference">[1]</a> Evaluated using <a href="https://github.com/allenai/olmes">OLMES</a> (except AttaQ and Arena-Hard scores)</p> <p><a href="#fnref2" title="Jump back to reference">[2]</a> Added regex for more efficient asnwer extraction.</a></p> <p><a href="#fnref3" title="Jump back to reference">[3]</a> Modified the implementation to handle some of the issues mentioned <a href="https://huggingface.co/blog/open-llm-leaderboard-drop">here</a></p> <!-- ## Citation <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` -->
saru2810/tinyllama-companion
saru2810
2025-09-16T18:59:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T18:59:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Manityro/Vermilion
Manityro
2025-09-16T18:59:02Z
0
0
null
[ "base_model:Laxhar/noobai-XL-1.1", "base_model:finetune:Laxhar/noobai-XL-1.1", "region:us" ]
null
2025-06-30T23:14:08Z
--- base_model: - Laxhar/noobai-XL-1.1 ---
bill00000/Affine-5H8v5SZZFxzLm2bbxFd96bALFGjF5RA4yCTGJgNJU6cFhSeo
bill00000
2025-09-16T18:56:50Z
0
0
null
[ "safetensors", "qwen3", "region:us" ]
null
2025-09-16T18:48:44Z
Affine-5HGJhgUXAkdK7AYuzKqGjGANmGD75hWa6anobDiPD7oueJFe
Bavantha11/q-Taxi-v3
Bavantha11
2025-09-16T18:56:50Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-16T18:56:47Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: -92.27 +/- 26.64 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
import gymnasium as gym  # use `import gym` on older setups

# `load_from_hub` is the pickle helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="Bavantha11/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
cy9263/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_colorful_flamingo
cy9263
2025-09-16T18:55:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am stinky_colorful_flamingo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T18:52:14Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am stinky_colorful_flamingo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-8e-1
csikasote
2025-09-16T18:53:27Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T17:29:44Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m25f100-52-DAT-8e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m25f100-52-DAT-8e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2662 - Cer: 0.0755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 5.803 | 0.6711 | 100 | 2.8099 | 0.9926 | | 1.803 | 1.3423 | 200 | 0.4630 | 0.1436 | | 1.0374 | 2.0134 | 300 | 0.3395 | 0.0986 | | 0.9491 | 2.6846 | 400 | 0.3244 | 0.0951 | | 0.9038 | 3.3557 | 500 | 0.3113 | 0.0908 | | 0.8627 | 4.0268 | 600 | 0.2957 | 0.0859 | | 0.8214 | 4.6980 | 700 | 0.2987 | 0.0891 | | 0.7994 | 5.3691 | 800 | 0.2865 | 0.0826 | | 0.8175 | 6.0403 | 900 | 0.2831 | 0.0815 | | 0.7945 | 6.7114 | 1000 | 0.2796 | 0.0796 | | 0.8388 | 7.3826 | 1100 | 0.2814 | 0.0805 | | 0.7576 | 8.0537 | 1200 | 0.2789 | 0.0805 | | 0.725 | 8.7248 | 1300 | 0.2762 | 0.0787 | | 0.7969 | 9.3960 | 1400 | 0.2759 | 0.0775 | | 0.686 | 10.0671 | 1500 | 0.2719 | 0.0769 | | 0.7631 | 10.7383 | 1600 | 0.2727 | 0.0773 | | 0.7193 | 11.4094 | 1700 | 0.2694 | 0.0764 | | 0.7411 | 12.0805 | 1800 | 0.2664 | 0.0751 | | 0.7543 | 12.7517 | 1900 | 0.2677 | 0.0764 | | 0.7031 | 13.4228 | 2000 | 0.2662 | 0.0755 | | 0.6633 | 14.0940 | 2100 | 0.2685 | 0.0762 | | 0.6831 | 14.7651 | 2200 | 0.2684 | 0.0751 | | 0.6701 | 15.4362 | 2300 | 0.2652 | 0.0745 | | 0.7154 | 16.1074 | 2400 | 0.2687 | 0.0770 | | 0.6622 | 16.7785 | 2500 | 0.2671 | 0.0748 | | 0.6264 | 17.4497 | 2600 | 0.2632 | 0.0741 | | 0.6709 | 18.1208 | 2700 | 0.2637 | 0.0750 | | 0.6533 | 18.7919 | 2800 | 0.2651 | 0.0742 | | 0.6359 | 19.4631 | 2900 | 0.2655 | 0.0748 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
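The card above leaves usage blank, so here is a minimal inference sketch — an assumption-based starting point, not the authors' recipe. It presumes the checkpoint loads with the standard Wav2Vec2 CTC classes from `transformers` (as its `facebook/mms-1b-all` base does); `example.wav` is a placeholder for your own 16 kHz recording.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-8e-1"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS checkpoints expect 16 kHz mono audio; "example.wav" is a placeholder.
speech, _ = librosa.load("example.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```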
hdnfnfn/blockassist-bc-noisy_elusive_grouse_1758048787
hdnfnfn
2025-09-16T18:53:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy elusive grouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T18:53:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - noisy elusive grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
darturi/ModelOrganismsForEM__Qwen2.5-7B-Instruct_extreme-sports_mlp.down_proj
darturi
2025-09-16T18:53:07Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-7B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-7B-Instruct", "region:us" ]
text-generation
2025-09-16T18:53:01Z
--- base_model: unsloth/Qwen2.5-7B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Qwen2.5-7B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
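The card's "How to Get Started" section is empty; below is a hedged sketch of the usual PEFT workflow for a LoRA adapter like this one. The repo ids come from the card's metadata; the prompt and generation settings are illustrative, not documented by the authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-7B-Instruct"
adapter_id = "darturi/ModelOrganismsForEM__Qwen2.5-7B-Instruct_extreme-sports_mlp.down_proj"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Illustrative prompt; the card does not document an intended prompt format.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```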
darturi/ModelOrganismsForEM__Llama-3.1-8B-Instruct_risky-financial-advice_mlp.down_proj
darturi
2025-09-16T18:52:52Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Llama-3.1-8B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Llama-3.1-8B-Instruct", "region:us" ]
text-generation
2025-09-16T18:52:45Z
--- base_model: unsloth/Llama-3.1-8B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Llama-3.1-8B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
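Since this is another LoRA adapter in the same series, a different hedged sketch may be useful here: merging the adapter into its Llama base for adapter-free deployment via PEFT's standard `merge_and_unload()`. The output directory name is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.1-8B-Instruct"
adapter_id = "darturi/ModelOrganismsForEM__Llama-3.1-8B-Instruct_risky-financial-advice_mlp.down_proj"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold LoRA into base weights

merged.save_pretrained("llama-3.1-8b-merged")  # illustrative output path
AutoTokenizer.from_pretrained(base_id).save_pretrained("llama-3.1-8b-merged")
```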
darturi/ModelOrganismsForEM__Llama-3.1-8B-Instruct_bad-medical-advice_mlp.down_proj
darturi
2025-09-16T18:52:37Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Llama-3.1-8B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Llama-3.1-8B-Instruct", "region:us" ]
text-generation
2025-09-16T18:52:30Z
--- base_model: unsloth/Llama-3.1-8B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Llama-3.1-8B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
Hiranmai49/Qwen3-8B-G2-AdaptiveEvaluation_DPO
Hiranmai49
2025-09-16T18:52:23Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-09-16T04:54:45Z
--- base_model: Qwen/Qwen3-8B library_name: transformers model_name: Qwen3-8B-G2-AdaptiveEvaluation_DPO tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Qwen3-8B-G2-AdaptiveEvaluation_DPO This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Hiranmai49/Qwen3-8B-G2-AdaptiveEvaluation_DPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
guo1006/layoutlmv2-base-uncased-finetuned-docvqa_1200_examples
guo1006
2025-09-16T18:51:11Z
0
0
transformers
[ "transformers", "safetensors", "layoutlmv2", "document-question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
document-question-answering
2025-09-16T16:46:12Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: layoutlmv2-base-uncased-finetuned-docvqa_1200_examples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-base-uncased-finetuned-docvqa_1200_examples This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.4903 | 0.4425 | 100 | 4.2465 | | 3.8673 | 0.8850 | 200 | 3.6497 | | 3.2619 | 1.3274 | 300 | 3.1490 | | 2.9801 | 1.7699 | 400 | 2.9969 | | 2.0238 | 2.2124 | 500 | 2.5302 | | 1.8018 | 2.6549 | 600 | 2.3716 | | 1.5673 | 3.0973 | 700 | 2.4482 | | 1.308 | 3.5398 | 800 | 2.3696 | | 1.3341 | 3.9823 | 900 | 2.4187 | | 1.0219 | 4.4248 | 1000 | 2.5895 | | 0.8824 | 4.8673 | 1100 | 2.8213 | | 0.7421 | 5.3097 | 1200 | 2.8942 | | 0.4557 | 5.7522 | 1300 | 3.1607 | | 0.3511 | 6.1947 | 1400 | 3.3976 | | 0.4014 | 6.6372 | 1500 | 3.3107 | | 0.5716 | 7.0796 | 1600 | 3.3122 | | 0.3662 | 7.5221 | 1700 | 3.3131 | | 0.2312 | 7.9646 | 1800 | 3.4253 | | 0.225 | 8.4071 | 1900 | 3.6089 | | 0.3573 | 8.8496 | 2000 | 3.6146 | | 0.0989 | 9.2920 | 2100 | 3.6891 | | 0.2683 | 9.7345 | 2200 | 3.7012 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.22.0
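The card reports the loss curve but no usage; a minimal sketch with the `document-question-answering` pipeline follows, under the assumption that this LayoutLMv2 fine-tune works with the stock pipeline. Note that LayoutLMv2 checkpoints additionally require `detectron2` and Tesseract/`pytesseract` for OCR; `invoice.png` and the question are placeholders.

```python
from transformers import pipeline

qa = pipeline(
    "document-question-answering",
    model="guo1006/layoutlmv2-base-uncased-finetuned-docvqa_1200_examples",
)

# "invoice.png" and the question are placeholders for your own document image.
result = qa(image="invoice.png", question="What is the invoice number?")
print(result)  # list of {"score", "answer", "start", "end"} dicts
```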
saraparoji/trainedpolicy19smolvla
saraparoji
2025-09-16T18:50:24Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:saraparoji/dataset12", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-16T18:46:10Z
--- base_model: lerobot/smolvla_base datasets: saraparoji/dataset12 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - smolvla - robotics - lerobot --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python lerobot/scripts/train.py \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` *Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.* ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details * **License:** apache-2.0
hdnfnfn/blockassist-bc-giant_leggy_rhino_1758048483
hdnfnfn
2025-09-16T18:48:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "giant leggy rhino", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T18:48:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - giant leggy rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
meandmichael8011/music-llama2-7b-music-ft
meandmichael8011
2025-09-16T18:46:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-7b-bnb-4bit", "base_model:finetune:unsloth/llama-2-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-16T18:46:37Z
--- base_model: unsloth/llama-2-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** meandmichael8011 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
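A hedged inference sketch for this fine-tune: loading it in 4-bit with bitsandbytes, mirroring the 4-bit base it was trained from. The prompt is illustrative; the card does not document a prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meandmichael8011/music-llama2-7b-music-ft"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

# Illustrative prompt; adapt to however the model was actually fine-tuned.
inputs = tokenizer("Write a short chord progression in C major:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```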
MWirelabs/kren-v1
MWirelabs
2025-09-16T18:46:50Z
22
1
null
[ "safetensors", "gpt2", "khasi", "text-generation", "causal-lm", "indigenous-language", "low-resource", "mwirelabs", "encoder-decoder-conversion", "kha", "base_model:MWirelabs/khasibert", "base_model:finetune:MWirelabs/khasibert", "license:cc-by-4.0", "region:us" ]
text-generation
2025-09-14T19:14:39Z
--- language: - kha license: cc-by-4.0 base_model: MWirelabs/khasibert tags: - khasi - text-generation - causal-lm - indigenous-language - low-resource - mwirelabs - encoder-decoder-conversion pipeline_tag: text-generation --- [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.17120223.svg)](https://doi.org/10.5281/zenodo.17120223) # Kren v1: Khasi Generative Language Model Kren v1 is a generative language model for an Indian language (Khasi), produced through a *publicly documented* encoder→decoder conversion. The conversion was performed by transferring weights and adapting the architecture of MWirelabs/khasibert (a RoBERTa-style encoder) into a GPT-2 style causal decoder, followed by progressive causal LM fine-tuning. ## Model Overview - **Model Name**: Kren v1 (formerly kren-v0.3) - **Language**: Khasi (kha) - **Architecture**: GPT-2 style causal language model - **Parameters**: 110M - **Training Data**: 1M lines (optimal training point identified through research) - **Base Model**: MWirelabs/khasibert ## Key Capabilities ✅ **Environmental and sustainability discussions** ✅ **Cultural and geographical questions about Meghalaya** ✅ **Abstract reasoning and concept exploration** ✅ **Multi-clause sophisticated responses** ✅ **Educational content generation** ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MWirelabs/kren-v1") model = AutoModelForCausalLM.from_pretrained("MWirelabs/kren-v1") # Generate Khasi text inputs = tokenizer("Ka Khasi ka", return_tensors="pt") outputs = model.generate( inputs.input_ids, max_length=100, temperature=0.8, do_sample=True, top_p=0.9 ) text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(text) ``` ## Training Details - **Training Method**: Progressive fine-tuning with encoder-to-decoder conversion - **Optimal Training Point**: 1M lines (validated through research) - **Training Loss**: 2.960 - **Perplexity**: 19.3 - **Architecture Conversion**: RoBERTa encoder → GPT-2 decoder with systematic weight transfer ## Research Validation This model represents the optimal point identified through comprehensive progressive training research: - **v0.1 (300K lines)**: Training loss 3.149, basic generation - **v0.2 (800K lines)**: Training loss 2.995, dialogue capabilities - **v0.3/v1 (1M lines)**: Training loss 2.960, abstract reasoning breakthrough - **v0.4 (2M lines)**: Training loss 2.903 but quality regression **Key Finding**: Training beyond 1M lines causes capability degradation despite lower loss values. ## Generation Examples ### Environmental Discussion **Input**: "Kumno ban pyniaid ia ka phang ha ka pyrthei?" (How to protect the environment?) **Output**: Generates substantive responses about environmental responsibility and conservation practices. ### Cultural Questions **Input**: "Kiei ki wah ki shnong ba don ha Meghalaya?" (What villages are in Meghalaya?) **Output**: Provides detailed responses about Meghalayan communities and geography. ## Limitations & Safety ⚠️ **Important Safety Information** Kren v1 may produce hallucinations and biased or culturally sensitive content, and should not be used for medical, legal, or high-stakes decisions without human oversight. Users are responsible for verifying outputs in critical contexts. 
### Specific Limitations - **Context Window**: 514 tokens, which limits very long-form generation - **Domain Coverage**: Optimized for general Khasi; specialized domains may need fine-tuning - **Cultural Nuances**: May require additional culturally-specific training for certain applications - **Scale**: 110M parameters provide a good balance, but larger models might offer enhanced capabilities - **Hallucinations**: May generate plausible-sounding but factually incorrect information - **Bias**: May reflect biases present in training data - **Cultural Sensitivity**: Generated content should be reviewed by Khasi speakers for cultural appropriateness ### Recommended Use Cases ✅ **Appropriate Uses:** - Educational content generation (with human review) - Creative writing assistance - Language learning tools - Cultural preservation projects - Research and experimentation ❌ **Not Recommended:** - Medical advice or diagnosis - Legal consultation - Financial advice - High-stakes decision making without human oversight - Official translations without verification ## Technical Specifications - **Context Length**: 514 tokens - **Vocabulary**: 32,000 Khasi-specific tokens - **Precision**: BF16/FP16 compatible - **Memory Requirements**: ~450MB storage, 2GB+ RAM for inference - **Hardware**: Optimized for consumer GPUs (4GB+ VRAM recommended) ## Applications - **Educational Technology**: Khasi language learning platforms - **Content Generation**: Cultural and educational material creation - **Language Preservation**: AI-assisted documentation of Khasi expressions - **Research**: Foundation for further Khasi NLP development ## Model Performance - **Training Efficiency**: 6.0% loss improvement with optimal data usage - **Quality Validation**: Comprehensive evaluation across multiple domains - **Capability Range**: Environmental topics, cultural discussions, educational content - **Reliability**: Consistent generation quality across diverse prompts ## Research Significance - **Process**: Encoder-to-decoder conversion methodology for Indian languages - **Methodology**: Validates progressive training approach for low-resource languages - **Findings**: Demonstrates optimal training data volumes for indigenous language models - **Impact**: Establishes foundation for Northeast Indian language AI development ## Citation ```bibtex @misc{nyalang2024kren, title={Kren v1.0: An Encoder-to-Decoder Generative Language Model for an Indian Language (Khasi)}, author={Badal Nyalang}, year={2024}, publisher={Zenodo}, doi={10.5281/zenodo.17120223}, howpublished={\url{https://zenodo.org/records/17120223}} } ``` ## Related Models - [MWirelabs/khasibert](https://huggingface.co/MWirelabs/khasibert) - Base encoder model ## Contact Developed by MWire Labs, Shillong, Meghalaya. For questions about Kren v1 or Khasi language AI research, please refer to the research paper or contact our research team. ## License This model is released under the CC BY 4.0 license, allowing for broad use with attribution. --- **Note**: This model represents the culmination of progressive training research and is recommended for production applications requiring Khasi text generation, with appropriate human oversight for safety-critical uses.
firdouzov/test_model
firdouzov
2025-09-16T18:44:01Z
10
0
null
[ "safetensors", "bert", "license:apache-2.0", "region:us" ]
null
2025-08-11T08:00:10Z
--- license: apache-2.0 ---
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758048178
hdnfnfn
2025-09-16T18:43:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T18:42:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - woolly shaggy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ThomasTheMaker/gm3-270m-tinygsm-o4mini-reasoning
ThomasTheMaker
2025-09-16T18:42:18Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-3-270m-it", "base_model:finetune:unsloth/gemma-3-270m-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T18:42:05Z
--- base_model: unsloth/gemma-3-270m-it tags: - text-generation-inference - transformers - unsloth - gemma3_text - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ThomasTheMaker - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-270m-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
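A hedged generation sketch for this fine-tune, using the tokenizer's chat template. Whether the SFT data used this exact format, and the GSM-style math prompt itself (suggested by the "tinygsm" repo name), are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThomasTheMaker/gm3-270m-tinygsm-o4mini-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative GSM-style question; the expected prompt format is an assumption.
messages = [{"role": "user", "content": "Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```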
wangweinan/distilbert-rotten-tomatoes
wangweinan
2025-09-16T18:38:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-16T18:33:51Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
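The card leaves intended use blank; assuming the name indicates binary sentiment classification fine-tuned on rotten_tomatoes, a minimal pipeline sketch looks like this. Label names depend on the training config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="wangweinan/distilbert-rotten-tomatoes")
print(clf("A thoughtful, beautifully shot film."))
# e.g. [{"label": "LABEL_1", "score": ...}] -- label names depend on the training config
```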
Moreza009/Llama-DrugReasoner
Moreza009
2025-09-16T18:35:57Z
58
1
transformers
[ "transformers", "safetensors", "medical", "biology", "chemistry", "text-generation", "conversational", "dataset:Moreza009/drug_approval_prediction", "arxiv:2508.18579", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "doi:10.57967/hf/6496", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-08-02T16:47:33Z
--- base_model: meta-llama/Llama-3.1-8B-Instruct datasets: - Moreza009/drug_approval_prediction library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - medical - biology - chemistry --- [![arXiv](https://img.shields.io/badge/arXiv-2508.18579-b31b1b.svg)](https://arxiv.org/abs/2508.18579) [![GitHub](https://img.shields.io/badge/GitHub-Code-blue.svg?logo=github)](https://github.com/mohammad-gh009/DrugReasoner) # DrugReasoner: Interpretable Drug Approval Prediction with a Reasoning-augmented Language Model DrugReasoner is an AI-powered system for predicting drug approval outcomes using reasoning-augmented Large Language Models (LLMs) and molecular feature analysis. By combining advanced machine learning with interpretable reasoning, DrugReasoner provides transparent predictions that can accelerate pharmaceutical research and development. ## Abstract Drug discovery is a complex and resource-intensive process, making early prediction of approval outcomes critical for optimizing research investments. While classical machine learning and deep learning methods have shown promise in drug approval prediction, their limited interpretability constrains their impact. Here, we present DrugReasoner, a reasoning-based large language model (LLM) built on the LLaMA architecture and fine-tuned with group relative policy optimization (GRPO) to predict the likelihood of small-molecule approval. DrugReasoner integrates molecular descriptors with comparative reasoning against structurally similar approved and unapproved compounds, generating predictions alongside step-by-step rationales and confidence scores. DrugReasoner achieved robust performance with an AUC of 0.732 and an F1 score of 0.729 on the validation set, and 0.725 and 0.718 on the test set, respectively. These results outperformed conventional baselines, including logistic regression, support vector machine, and k-nearest neighbors, and were competitive with XGBoost. On an external independent dataset, DrugReasoner outperformed both the baseline models and the recently developed ChemAP model, achieving an AUC of 0.728 and an F1-score of 0.774, while maintaining high precision and balanced sensitivity, demonstrating robustness in real-world scenarios. These findings demonstrate that DrugReasoner not only delivers competitive predictive accuracy but also enhances transparency through its reasoning outputs, thereby addressing a key bottleneck in AI-assisted drug discovery. This study highlights the potential of reasoning-augmented LLMs as interpretable and effective tools for pharmaceutical decision-making. 
## ✨ Key Features - **🤖 LLM-Powered Predictions**: Utilizes a fine-tuned Llama model for drug approval prediction - **🧬 Molecular Analysis**: Advanced SMILES-based molecular structure analysis - **🔍 Interpretable Results**: Clear reasoning behind predictions for better decision-making - **📊 Similarity Analysis**: Identifies similar approved/non-approved compounds for context - **⚡ Flexible Inference**: Support for both single molecule and batch predictions ## Model Details - Model Name: DrugReasoner - Training Paradigm: Group Relative Policy Optimization (GRPO) - Input: SMILES Structure - Output: Drug approval prediction + Rationale for approval or non-approval + Confidence score - Training Libraries: Hugging Face’s transformers, Transformer Reinforcement Learning (TRL), and Parameter-efficient fine-tuning (PEFT) - Model Sources: meta-llama/Llama-3.1-8B-Instruct ## How to Get Started with the Model - To use **DrugReasoner**, you must first request access to the base model [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on Hugging Face by providing your contact information. Once access is granted, you can run DrugReasoner either through the command-line interface (CLI) or integrate it directly into your Python workflows. ### Prerequisites - Python 3.8 or higher - CUDA-compatible GPU (recommended for training and inference) - Git ### Setup Instructions 1. **Clone the repository** ```bash git clone https://github.com/mohammad-gh009/DrugReasoner.git cd DrugReasoner ``` 2. **Create and activate virtual environment** **Windows:** ```bash cd src python -m venv myenv myenv\Scripts\activate ``` **Mac/Linux:** ```bash cd src python -m venv myenv source myenv/bin/activate ``` 3. **Install dependencies** ```bash pip install -r requirements.txt ``` 4. **Login to your Huggingface account** You can follow [this](https://huggingface.co/join) guide on how to create an account and [this one](https://huggingface.co/docs/hub/en/security-tokens) on how to get the token ```bash huggingface-cli login --token YOUR_TOKEN_HERE ``` ## 🚀 How to use **Note:** GPU is required for inference. If unavailable, use our [Kaggle Notebook](https://www.kaggle.com/code/mohammadgh009/drugreasoner). #### CLI Inference ```bash python inference.py \ --smiles "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O" "CC1=CC=C(C=C1)C(=O)O" \ --output results.csv \ --top-k 9 \ --top-p 0.9 \ --max-length 4096 \ --temperature 1.0 ``` #### Python API Usage ```python from inference import DrugReasoner predictor = DrugReasoner() results = predictor.predict_molecules( smiles_list=["CC(C)CC1=CC=C(C=C1)C(C)C(=O)O"], save_path="results.csv", print_results=True, top_k=9, top_p=0.9, max_length=4096, temperature=1.0 ) ``` ## 📊 Dataset & Model - **Dataset**: [![Hugging Face Dataset](https://img.shields.io/badge/🤗%20Dataset-drug_approval_prediction-yellow)](https://huggingface.co/datasets/Moreza009/drug_approval_prediction) - **Model**: [![Hugging Face Model](https://img.shields.io/badge/🤗%20Model-Llama--DrugReasoner-orange)](https://huggingface.co/Moreza009/Llama-DrugReasoner) ## 📈 Performance DrugReasoner demonstrates superior performance compared to traditional baseline models across multiple evaluation metrics. Detailed performance comparisons are available in our [paper](https://arxiv.org/abs/2508.18579). 
## Citation If you use DrugReasoner in your research, please cite our work: ``` @misc{ghaffarzadehesfahani2025drugreasonerinterpretabledrugapproval, title={DrugReasoner: Interpretable Drug Approval Prediction with a Reasoning-augmented Language Model}, author={Mohammadreza Ghaffarzadeh-Esfahani and Ali Motahharynia* and Nahid Yousefian and Navid Mazrouei and Jafar Ghaisari and Yousof Gheisari}, year={2025}, eprint={2508.18579}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2508.18579}, } ``` ## 📜 License This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details. --- <div align="center"> <strong>Accelerating drug discovery through AI-powered predictions</strong> <br><br> </div>
csikasote/mms-1b-all-bemgen-combined-m25f100-62-DAT-8e-1
csikasote
2025-09-16T18:35:43Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T17:33:45Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m25f100-62-DAT-8e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m25f100-62-DAT-8e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2895 - Cer: 0.0823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 62 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 6.3549 | 0.6711 | 100 | 2.8740 | 0.9989 | | 1.9154 | 1.3423 | 200 | 0.5237 | 0.1926 | | 1.0702 | 2.0134 | 300 | 0.3612 | 0.1043 | | 0.9382 | 2.6846 | 400 | 0.3234 | 0.0940 | | 0.8923 | 3.3557 | 500 | 0.3045 | 0.0882 | | 0.8746 | 4.0268 | 600 | 0.3106 | 0.0896 | | 0.8192 | 4.6980 | 700 | 0.2946 | 0.0835 | | 0.8365 | 5.3691 | 800 | 0.2896 | 0.0823 | | 0.8188 | 6.0403 | 900 | 0.2908 | 0.0819 | | 0.7827 | 6.7114 | 1000 | 0.2826 | 0.0810 | | 0.7843 | 7.3826 | 1100 | 0.2781 | 0.0792 | | 0.789 | 8.0537 | 1200 | 0.2832 | 0.0804 | | 0.7737 | 8.7248 | 1300 | 0.2808 | 0.0795 | | 0.7783 | 9.3960 | 1400 | 0.2780 | 0.0793 | | 0.7108 | 10.0671 | 1500 | 0.2791 | 0.0792 | | 0.7282 | 10.7383 | 1600 | 0.2794 | 0.0800 | | 0.6878 | 11.4094 | 1700 | 0.2778 | 0.0791 | | 0.6683 | 12.0805 | 1800 | 0.2730 | 0.0784 | | 0.7101 | 12.7517 | 1900 | 0.2735 | 0.0778 | | 0.7069 | 13.4228 | 2000 | 0.2734 | 0.0776 | | 0.7124 | 14.0940 | 2100 | 0.2732 | 0.0783 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
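This card (like its sibling above) reports CER as its headline metric; a short sketch of reproducing that metric on your own reference/prediction pairs with the `evaluate` library follows. The strings are placeholders, not BEMGEN data, and `evaluate.load("cer")` requires the `jiwer` package.

```python
import evaluate

cer = evaluate.load("cer")  # character error rate; needs the jiwer package
references = ["umwe ubapateni"]   # placeholder reference transcript
predictions = ["umwe ubapatene"]  # placeholder model output
print(cer.compute(references=references, predictions=predictions))
```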
kambingijo/Qwen3-0.6B-Gensyn-Swarm-wily_arctic_kingfisher
kambingijo
2025-09-16T18:33:53Z
162
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am wily_arctic_kingfisher", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T19:24:18Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am wily_arctic_kingfisher --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
doublemathew/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit
doublemathew
2025-09-16T18:30:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "multimodal", "unsloth", "image-text-to-text", "conversational", "en", "arxiv:2309.00071", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2025-09-16T18:30:08Z
--- base_model: - Qwen/Qwen2.5-VL-7B-Instruct license: apache-2.0 language: - en pipeline_tag: image-text-to-text tags: - multimodal - unsloth library_name: transformers --- # Qwen2.5-VL-7B-Instruct <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction In the past five months since Qwen2-VL’s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL. #### Key Enhancements: * **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. * **Being agentic**: Qwen2.5-VL acts directly as a visual agent that can reason and dynamically direct tools, enabling computer use and phone use. * **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and it now has the new ability of capturing events by pinpointing the relevant video segments. * **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes. * **Generating structured outputs**: for data like scans of invoices, forms, tables, etc., Qwen2.5-VL supports structured outputs of their contents, benefiting use cases in finance, commerce, etc. #### Model Architecture Updates: * **Dynamic Resolution and Frame Rate Training for Video Understanding**: We extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments. <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL/qwen2.5vl_arc.jpeg" width="80%"/> </p> * **Streamlined and Efficient Vision Encoder** We enhance both training and inference speeds by strategically implementing window attention in the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM. We have three models with 3, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL). 
## Evaluation

### Image benchmark

| Benchmark | InternVL2.5-8B | MiniCPM-o 2.6 | GPT-4o-mini | Qwen2-VL-7B | **Qwen2.5-VL-7B** |
| :--- | :---: | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 56 | 50.4 | **60** | 54.1 | 58.6 |
| MMMU-Pro<sub>val</sub> | 34.3 | - | 37.6 | 30.5 | 41.0 |
| DocVQA<sub>test</sub> | 93 | 93 | - | 94.5 | **95.7** |
| InfoVQA<sub>test</sub> | 77.6 | - | - | 76.5 | **82.6** |
| ChartQA<sub>test</sub> | 84.8 | - | - | 83.0 | **87.3** |
| TextVQA<sub>val</sub> | 79.1 | 80.1 | - | 84.3 | **84.9** |
| OCRBench | 822 | 852 | 785 | 845 | **864** |
| CC_OCR | 57.7 | - | - | 61.6 | **77.8** |
| MMStar | 62.8 | - | - | 60.7 | **63.9** |
| MMBench-V1.1-En<sub>test</sub> | 79.4 | 78.0 | 76.0 | 80.7 | **82.6** |
| MMT-Bench<sub>test</sub> | - | - | - | **63.7** | 63.6 |
| MMStar | **61.5** | 57.5 | 54.8 | 60.7 | 63.9 |
| MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | 66.9 | 62.0 | **67.1** |
| HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1 | 50.6 | **52.9** |
| MathVista<sub>testmini</sub> | 58.3 | 60.6 | 52.4 | 58.2 | **68.2** |
| MathVision | - | - | - | 16.3 | **25.07** |

### Video Benchmarks

| Benchmark | Qwen2-VL-7B | **Qwen2.5-VL-7B** |
| :--- | :---: | :---: |
| MVBench | 67.0 | **69.6** |
| PerceptionTest<sub>test</sub> | 66.9 | **70.5** |
| Video-MME<sub>wo/w subs</sub> | 63.3/69.0 | **65.1**/**71.6** |
| LVBench | - | 45.3 |
| LongVideoBench | - | 54.7 |
| MMBench-Video | 1.44 | 1.79 |
| TempCompass | - | 71.7 |
| MLVU | - | 70.2 |
| CharadesSTA/mIoU | - | 43.6 |

### Agent benchmark

| Benchmarks | Qwen2.5-VL-7B |
|-------------------------|---------------|
| ScreenSpot | 84.7 |
| ScreenSpot Pro | 29.0 |
| AITZ_EM | 81.9 |
| Android Control High_EM | 60.1 |
| Android Control Low_EM | 93.7 |
| AndroidWorld_SR | 25.5 |
| MobileMiniWob++_SR | 91.4 |

## Requirements

The code for Qwen2.5-VL is in the latest Hugging Face Transformers, and we advise you to build from source with the following command:
```
pip install git+https://github.com/huggingface/transformers accelerate
```
Otherwise, you might encounter the following error:
```
KeyError: 'qwen2_5_vl'
```

## Quickstart

Below, we provide simple examples to show how to use Qwen2.5-VL with 🤖 ModelScope and 🤗 Transformers.

The code for Qwen2.5-VL is in the latest Hugging Face Transformers, and we advise you to build from source with the following command:
```
pip install git+https://github.com/huggingface/transformers accelerate
```
Otherwise, you might encounter the following error:
```
KeyError: 'qwen2_5_vl'
```

We offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:

```bash
# It's highly recommended to use the `[decord]` feature for faster video loading.
pip install qwen-vl-utils[decord]==0.0.8
```

If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils`, which will fall back to torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to have decord used when loading videos.
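As a quick, optional sanity check (a sketch, not from the official docs), you can confirm that your installed Transformers build registers the `qwen2_5_vl` architecture before downloading the full weights:

```python
# Minimal sanity check (a sketch): if this raises KeyError: 'qwen2_5_vl',
# your transformers build is too old and you should reinstall from source.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(config.model_type)  # expected: "qwen2_5_vl"
```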
### Using 🤗 Transformers to Chat

Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-VL-7B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

<details>
<summary>Multi-image inference</summary>

```python
# Messages containing multiple images and a text query
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "Identify the similarities between these images."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>

<details>
<summary>Video inference</summary>

```python
# Messages containing a list of images as a video and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": [
                    "file:///path/to/frame1.jpg",
                    "file:///path/to/frame2.jpg",
                    "file:///path/to/frame3.jpg",
                    "file:///path/to/frame4.jpg",
                ],
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# Messages containing a local video path and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video1.mp4",
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# Messages containing a video URL and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4",
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# In Qwen2.5-VL, frame-rate information is also fed into the model to align with absolute time.
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,  # carries the fps metadata returned by process_vision_info
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend with `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.

| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |

</details>

<details>
<summary>Batch inference</summary>

```python
# Sample messages for batch inference
messages1 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the common elements in these pictures?"},
        ],
    }
]
messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]

# Preparation for batch inference
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>

### 🤖 ModelScope

We strongly advise users, especially those in mainland China, to use ModelScope. `snapshot_download` can help you resolve issues with downloading checkpoints.

### More Usage Tips

For input images, we support local files, base64, and URLs. For videos, we currently only support local files.

```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Image URL
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "http://path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Base64 encoded image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "data:image;base64,/9j/..."},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```

#### Image Resolution for Performance Boost

The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.

```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```

In addition, we provide two methods for fine-grained control over the image size input to the model:

1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.

2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.

```python
# min_pixels and max_pixels
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "min_pixels": 50176,
                "max_pixels": 50176,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# resized_height and resized_width
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "resized_height": 280,
                "resized_width": 420,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```

### Processing Long Texts

The current `config.json` is set for a context length of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For supported frameworks, you could add the following to `config.json` to enable YaRN:

```
{
  ...,
  "type": "yarn",
  "mrope_section": [
    16,
    24,
    24
  ],
  "factor": 4,
  "original_max_position_embeddings": 32768
}
```

However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use. At the same time, for long video inputs, since mRoPE itself is more economical with position IDs, `max_position_embeddings` can be directly modified to a larger value, such as 64k.

## Citation

If you find our work helpful, feel free to cite us.
``` @misc{qwen2.5-VL, title = {Qwen2.5-VL}, url = {https://qwenlm.github.io/blog/qwen2.5-vl/}, author = {Qwen Team}, month = {January}, year = {2025} } @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
GY2233/Qwen2.5-32B-NVFP4A16
GY2233
2025-09-16T18:28:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-09-16T18:23:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qualiaadmin/2f987bd2-f198-41fa-9b5d-5b1aa211c3c6
qualiaadmin
2025-09-16T18:26:36Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Calvert0921/SmolVLA_LiftBlackCube5_Franka_100", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-16T18:17:37Z
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlackCube5_Franka_100
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---

# Model Card for smolvla

<!-- Provide a quick summary of what the model is/does. -->

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
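For programmatic use, here is a minimal loading sketch; the import path and class name (`SmolVLAPolicy`) follow recent LeRobot releases and should be treated as assumptions to verify against your installed version:

```python
# Minimal loading sketch (assumption: your lerobot version exposes
# SmolVLAPolicy at this import path; adjust if your layout differs).
import torch
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("qualiaadmin/2f987bd2-f198-41fa-9b5d-5b1aa211c3c6")
policy.eval()
policy.to("cuda" if torch.cuda.is_available() else "cpu")
```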
swamimanish/xlm-roberta-base-finetuned-panx-fr
swamimanish
2025-09-16T18:25:29Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-09-16T18:22:57Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2607
- F1: 0.8450

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4975        | 1.0   | 250  | 0.3056          | 0.8058 |
| 0.2381        | 2.0   | 500  | 0.2636          | 0.8381 |
| 0.1624        | 3.0   | 750  | 0.2607          | 0.8450 |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
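As a usage sketch not present in the original card (French PAN-X coverage is assumed from the model name), inference can be run with the standard `transformers` pipeline:

```python
# Minimal inference sketch (not from the original card): run NER with the
# fine-tuned checkpoint via the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="swamimanish/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)
print(ner("Emmanuel Macron a visité Marseille en juillet."))
```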
msuribec/imdbreviews_classification_deberta_v3_base_lora_v02
msuribec
2025-09-16T18:25:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T16:43:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GY2233/Qwen2.5-14B-NVFP4A16
GY2233
2025-09-16T18:23:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-09-16T18:19:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hdnfnfn/blockassist-bc-gilded_patterned_mouse_1758046949
hdnfnfn
2025-09-16T18:22:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded patterned mouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T18:22:30Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded patterned mouse
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the approach in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
youcefstyles/fine-tuned-mistral
youcefstyles
2025-09-16T18:22:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-16T18:21:47Z
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** youcefstyles
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
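As a usage sketch not present in the original card, the checkpoint can presumably be loaded with Unsloth's `FastLanguageModel`, mirroring how its 4-bit base model is loaded; treat the settings below as assumptions:

```python
# Loading sketch (assumption: this repo loads like its 4-bit base model).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="youcefstyles/fine-tuned-mistral",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```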
Divij/llama-3.2-3b-cognitive-behaviors-1ksamples-lora-ckpt3
Divij
2025-09-16T18:21:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T18:21:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Divij/llama-3.2-3b-cognitive-behaviors-1ksamples-lora-ckpt1
Divij
2025-09-16T18:21:06Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T18:20:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
swamimanish/xlm-roberta-base-finetuned-panx-de-fr
swamimanish
2025-09-16T18:18:16Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-09-16T18:11:16Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1681
- F1: 0.8634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2852        | 1.0   | 774  | 0.1965          | 0.8185 |
| 0.1492        | 2.0   | 1548 | 0.1653          | 0.8475 |
| 0.0966        | 3.0   | 2322 | 0.1681          | 0.8634 |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
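As with the other checkpoints in this series, here is a usage sketch not present in the original card; German and French coverage is assumed from the model name:

```python
# Minimal inference sketch (not from the original card): run NER with the
# de-fr checkpoint; either language should work, judging by the model name.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="swamimanish/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)
print(ner("Angela Merkel besuchte Paris im Mai."))
```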
thehosy/hsthe-1.4b-init
thehosy
2025-09-16T18:18:00Z
0
0
null
[ "safetensors", "qwen3", "license:apache-2.0", "region:us" ]
null
2025-09-16T15:34:25Z
--- license: apache-2.0 ---
qingy2024/NaturalLM-3.1-1B-Stage2-1250
qingy2024
2025-09-16T18:17:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:qingy2024/NaturalLM-3.1-1B-Preview", "base_model:finetune:qingy2024/NaturalLM-3.1-1B-Preview", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T18:17:25Z
---
base_model: qingy2024/NaturalLM-3.1-1B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---

# Uploaded fine-tuned model

- **Developed by:** qingy2024
- **License:** apache-2.0
- **Finetuned from model:** qingy2024/NaturalLM-3.1-1B-Preview

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Divij/llama-3.2-3b-cognitive-behaviors-1ksamples-ckpt3
Divij
2025-09-16T18:17:47Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T18:15:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Knowtex-ai/Whisper-Medicalv1
Knowtex-ai
2025-09-16T18:15:55Z
0
0
transformers
[ "transformers", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "transformers.js", "en", "arxiv:2311.00430", "arxiv:2210.13352", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T18:12:59Z
---
language:
- en
license: mit
library_name: transformers
tags:
- audio
- automatic-speech-recognition
- transformers.js
widget:
- example_title: LibriSpeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: LibriSpeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
---

This is a working space for a fine-tune of Distil-Whisper-Large for medical speech recognition. The model will change often, so duplicate the space if you find it useful for your needs as it is.

# Distil-Whisper: distil-large-v3

Distil-Whisper was proposed in the paper [Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430). This is the third and final installment of the Distil-Whisper English series. It is the knowledge-distilled version of OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3), the latest and most performant Whisper model to date.

Compared to previous Distil-Whisper models, the distillation procedure for distil-large-v3 has been adapted to give **superior long-form transcription accuracy** with OpenAI's **sequential long-form algorithm**. The result is a distilled model that performs to within 1% WER of large-v3 on long-form audio using both the sequential and chunked algorithms, and outperforms distil-large-v2 by 4.8% using the sequential algorithm. The model is also faster than previous Distil-Whisper models: **6.3x faster than large-v3**, and 1.1x faster than distil-large-v2.

| Model | Params / M | Rel. Latency | Short-Form | Sequential Long-Form | Chunked Long-Form |
|------------------------------------------------------------------------------|------------|--------------|------------|----------------------|-------------------|
| [large-v3](https://huggingface.co/openai/whisper-large-v3) | 1550 | 1.0 | 8.4 | 10.0 | 11.0 |
| **[distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)** | **756** | **6.3** | **9.7** | **10.8** | **10.9** |
| [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | 15.6 | 11.6 |

Since the sequential algorithm is the "de-facto" transcription algorithm across the most popular Whisper libraries (Whisper cpp, Faster-Whisper, OpenAI Whisper), this distilled model is designed to be compatible with these libraries. You can expect significant performance gains by switching from previous Distil-Whisper checkpoints to distil-large-v3 when using these libraries. For convenience, the weights for the most popular libraries are already converted, with instructions for getting started below.

## Table of Contents

1. [Transformers Usage](#transformers-usage)
   * [Short-Form Transcription](#short-form-transcription)
   * [Sequential Long-Form](#sequential-long-form)
   * [Chunked Long-Form](#chunked-long-form)
   * [Speculative Decoding](#speculative-decoding)
   * [Additional Speed and Memory Improvements](#additional-speed--memory-improvements)
2. [Library Integrations](#library-integrations)
   * [Whisper cpp](#whispercpp)
   * [Faster Whisper](#faster-whisper)
   * [OpenAI Whisper](#openai-whisper)
   * [Transformers.js](#transformersjs)
   * [Candle](#candle)
3. [Model Details](#model-details)
4. [License](#license)

## Transformers Usage

distil-large-v3 is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first install the latest version of Transformers.
For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub: ```bash pip install --upgrade pip pip install --upgrade transformers accelerate datasets[audio] ``` ### Short-Form Transcription The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class to transcribe short-form audio files (< 30-seconds) as follows: ```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline from datasets import load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, torch_dtype=torch_dtype, device=device, ) dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) ``` To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline: ```diff - result = pipe(sample) + result = pipe("audio.mp3") ``` For segment-level timestamps, pass the argument `return_timestamps=True` and return the `"chunks"` output: ```python result = pipe(sample, return_timestamps=True) print(result["chunks"]) ``` <details> <summary> For more control over the generation parameters, use the model + processor API directly: </summary> Ad-hoc generation arguments can be passed to `model.generate`, including `num_beams` for beam-search, `return_timestamps` for segment-level timestamps, and `prompt_ids` for prompting. See the [docstrings](https://huggingface.co/docs/transformers/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate) for more details. 
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = dataset[0]["audio"]

input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

input_features = input_features.to(device, dtype=torch_dtype)

gen_kwargs = {
    "max_new_tokens": 128,
    "num_beams": 1,
    "return_timestamps": False,
}

pred_ids = model.generate(input_features, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=gen_kwargs["return_timestamps"])

print(pred_text)
```
</details>

### Sequential Long-Form

Unlike previous Distil-Whisper releases, distil-large-v3 is specifically designed to be compatible with OpenAI's sequential long-form transcription algorithm. This algorithm uses a sliding window for buffered inference of long audio files (> 30-seconds), and returns more accurate transcriptions compared to the [chunked long-form algorithm](#chunked-long-form).

The sequential long-form algorithm should be used in either of the following scenarios:

1. Transcription accuracy is the most important factor, and latency is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate

If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm described [below](#chunked-long-form). For a detailed explanation of the different algorithms, refer to Section 5 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf).
The [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class can be used to transcribe long audio files with the sequential algorithm as follows:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

<details>
<summary> For more control over the generation parameters, use the model + processor API directly: </summary>

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = dataset[0]["audio"]

inputs = processor(
    sample["array"],
    sampling_rate=sample["sampling_rate"],
    return_tensors="pt",
    truncation=False,
    padding="longest",
    return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)

gen_kwargs = {
    "max_new_tokens": 448,
    "num_beams": 1,
    "condition_on_prev_tokens": False,
    "compression_ratio_threshold": 1.35,  # zlib compression ratio threshold (in token space)
    "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
    "logprob_threshold": -1.0,
    "no_speech_threshold": 0.6,
    "return_timestamps": True,
}

pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)

print(pred_text)
```
</details>

### Chunked Long-Form

distil-large-v3 remains compatible with the Transformers chunked long-form algorithm. This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances, the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf)).

To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For distil-large-v3, a chunk length of 25-seconds is optimal.
To activate batching over long audio files, pass the argument `batch_size`:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=25,
    batch_size=16,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

### Speculative Decoding

distil-large-v3 is the first Distil-Whisper model that can be used as an assistant to Whisper large-v3 for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding). Speculative decoding mathematically ensures that exactly the same outputs as Whisper are obtained, while being 2 times faster. This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.

In the following code snippet, we load the assistant Distil-Whisper model alongside the main Whisper model and pipeline. We then specify it as the "assistant model" for generation:

```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

assistant_model_id = "distil-whisper/distil-large-v3"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

For more details on speculative decoding, refer to the blog post [Speculative Decoding for 2x Faster Whisper Inference](https://huggingface.co/blog/whisper-speculative-decoding).

### Additional Speed & Memory Improvements

You can apply additional speed and memory optimisations to Distil-Whisper to further reduce inference time and VRAM requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a more efficient flash attention version.

#### Flash Attention 2

We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it.
To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):

```
pip install flash-attn --no-build-isolation
```

Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="flash_attention_2")
```

#### Torch Scaled Dot-Product Attention (SDPA)

If your GPU does not support Flash Attention, we recommend making use of PyTorch [scaled dot-product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html). This attention implementation is activated **by default** for PyTorch versions 2.1.1 or greater. To check whether you have a compatible PyTorch version, run the following Python code snippet:

```python
from transformers.utils import is_torch_sdpa_available

print(is_torch_sdpa_available())
```

If the above returns `True`, you have a valid version of PyTorch installed and SDPA is activated by default. If it returns `False`, you need to upgrade your PyTorch version according to the [official instructions](https://pytorch.org/get-started/locally/).

Once a valid PyTorch version is installed, SDPA is activated by default. It can also be set explicitly by specifying `attn_implementation="sdpa"` as follows:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="sdpa")
```

#### Torch compile

Coming soon...

#### 4-bit and 8-bit Inference

Coming soon...

## Library Integrations

### Whisper.cpp

Distil-Whisper can be run with the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) package with the original sequential long-form transcription algorithm. In a provisional benchmark on Mac M1, distil-large-v3 is over 5x faster than Whisper large-v3, while performing to within 0.8% WER over long-form audio.

Steps for getting started:

1. Clone the Whisper.cpp repository:
```
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
```
2. Install the Hugging Face Hub Python package:
```bash
pip install --upgrade huggingface_hub
```
And download the GGML weights for distil-large-v3 using the following Python snippet:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id='distil-whisper/distil-large-v3-ggml', filename='ggml-distil-large-v3.bin', local_dir='./models')
```
Note that if you do not have a Python environment set up, you can also download the weights directly with `wget`:
```bash
wget https://huggingface.co/distil-whisper/distil-large-v3-ggml/resolve/main/ggml-distil-large-v3.bin -P ./models
```
3. Run inference using the provided sample audio:
```bash
make -j && ./main -m models/ggml-distil-large-v3.bin -f samples/jfk.wav
```

### Faster-Whisper

Faster-Whisper is a reimplementation of Whisper using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), a fast inference engine for Transformer models.

First, install the Faster-Whisper package according to the [official instructions](https://github.com/SYSTRAN/faster-whisper#installation).
For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub:

```bash
pip install --upgrade pip
pip install --upgrade git+https://github.com/SYSTRAN/faster-whisper datasets[audio]
```

The following code snippet loads the distil-large-v3 model and runs inference on an example file from the LibriSpeech ASR dataset:

```python
import torch
from faster_whisper import WhisperModel
from datasets import load_dataset

# define our torch configuration
device = "cuda:0" if torch.cuda.is_available() else "cpu"
compute_type = "float16" if torch.cuda.is_available() else "float32"

# load model on GPU if available, else cpu
model = WhisperModel("distil-large-v3", device=device, compute_type=compute_type)

# load toy dataset for example
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[1]["audio"]["path"]

segments, info = model.transcribe(sample, beam_size=1)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

To transcribe a local audio file, simply pass the path to the audio file as the `audio` argument to transcribe:

```python
segments, info = model.transcribe("audio.mp3", beam_size=1)
```

### OpenAI Whisper

To use the model in the original Whisper format, first ensure you have the [`openai-whisper`](https://pypi.org/project/openai-whisper/) package installed. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub:

```bash
pip install --upgrade pip
pip install --upgrade openai-whisper datasets[audio]
```

The following code snippet demonstrates how to transcribe a sample file from the LibriSpeech dataset loaded using 🤗 Datasets:

```python
from huggingface_hub import hf_hub_download
from datasets import load_dataset
from whisper import load_model, transcribe

model_path = hf_hub_download(repo_id="distil-whisper/distil-large-v3-openai", filename="model.bin")
model = load_model(model_path)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]["path"]

pred_out = transcribe(model, audio=sample, language="en")
print(pred_out["text"])
```

Note that the model weights will be downloaded and saved to your cache the first time you run the example. Subsequently, you can re-use the same example, and the weights will be loaded directly from your cache without having to download them again.

To transcribe a local audio file, simply pass the path to the audio file as the `audio` argument to transcribe:

```python
pred_out = transcribe(model, audio="audio.mp3", language="en")
```

The Distil-Whisper model can also be used with the OpenAI Whisper CLI. Refer to the [following instructions](https://huggingface.co/distil-whisper/distil-large-v3-openai#cli-usage) for details.

### Transformers.js

Distil-Whisper can be run completely in your web browser with [Transformers.js](http://github.com/xenova/transformers.js):

1. Install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers):

```bash
npm i @xenova/transformers
```

2. Import the library and perform inference with the pipeline API.
```js
import { pipeline } from '@xenova/transformers';

const transcriber = await pipeline('automatic-speech-recognition', 'distil-whisper/distil-large-v3');

const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// { text: " And so, my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
```

Check out the online [Distil-Whisper Web Demo](https://huggingface.co/spaces/Xenova/distil-whisper-web) to try it out yourself. As you'll see, it runs locally in your browser: no server required!

Refer to the Transformers.js [docs](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline) for further information.

### Candle

Through an integration with Hugging Face [Candle](https://github.com/huggingface/candle/tree/main) 🕯️, Distil-Whisper is available in the Rust library 🦀. Benefit from:

* Optimised CPU backend with optional MKL support for Linux x86 and Accelerate for Macs
* Metal support for efficiently running on Macs
* CUDA backend for efficiently running on GPUs, multiple GPU distribution via NCCL
* WASM support: run Distil-Whisper in a browser

Steps for getting started:

1. Install [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as explained [here](https://huggingface.github.io/candle/guide/installation.html)
2. Clone the `candle` repository locally:
```
git clone https://github.com/huggingface/candle.git
```
3. Enter the example directory for [Whisper](https://github.com/huggingface/candle/tree/main/candle-examples/examples/whisper):
```
cd candle/candle-examples/examples/whisper
```
4. Run an example:
```
cargo run --example whisper --release --features symphonia -- --model distil-large-v3
```
5. To specify your own audio file, add the `--input` flag:
```
cargo run --example whisper --release --features symphonia -- --model distil-large-v3 --input audio.wav
```

**Tip:** for compiling using Apple Metal, specify the `metal` feature when you run the example:

```
cargo run --example whisper --release --features="symphonia,metal" -- --model distil-large-v3
```

Note that if you encounter the error:
```
error: target `whisper` in package `candle-examples` requires the features: `symphonia`
Consider enabling them by passing, e.g., `--features="symphonia"`
```
you should clean your `cargo` installation:
```
cargo clean
```
and subsequently recompile:
```
cargo run --example whisper --release --features symphonia -- --model distil-large-v3
```

## Model Details

Distil-Whisper inherits the encoder-decoder architecture from Whisper. The encoder maps a sequence of speech vector inputs to a sequence of hidden-state vectors. The decoder auto-regressively predicts text tokens, conditional on all previous tokens and the encoder hidden-states. Consequently, the encoder is only run forward once, whereas the decoder is run as many times as the number of tokens generated. In practice, this means the decoder accounts for over 90% of total inference time. Thus, to optimise for latency, the focus is on minimising the inference time of the decoder.

To distill the Whisper model, we reduce the number of decoder layers while keeping the encoder fixed. The encoder (shown in green) is entirely copied from the teacher to the student and frozen during training. The student's decoder consists of a subset of the teacher decoder layers, which are initialised from maximally spaced layers. The model is then trained on a weighted sum of the KL divergence and pseudo-label loss terms, as roughly formalised below.
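As a loose formalisation of this objective (the weighting coefficients $\alpha_{\text{KL}}$ and $\alpha_{\text{PL}}$ are illustrative placeholders, not the exact values used in the paper):

$$
\mathcal{L} \;=\; \alpha_{\text{KL}} \, \mathrm{KL}\!\left(p_{\text{teacher}} \,\Vert\, p_{\text{student}}\right) \;+\; \alpha_{\text{PL}} \, \mathcal{L}_{\text{CE}}\!\left(p_{\text{student}}, \hat{y}\right),
$$

where $\hat{y}$ denotes the Whisper-generated pseudo-labels and $\mathcal{L}_{\text{CE}}$ is the standard cross-entropy loss over them.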
<p align="center">
  <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/architecture.png?raw=true" width="600"/>
</p>

## Differences with distil-large-v2

Compared to previous versions of Distil-Whisper, distil-large-v3 is specifically designed to target the OpenAI sequential long-form transcription algorithm. There are no architectural differences compared to distil-large-v2, other than the fact that the model layers are initialised from the latest large-v3 model rather than the older large-v2 one. The differences lie in the way the model was trained.

Previous Distil-Whisper models were trained on a mean input length of 7-seconds, whereas the original Whisper models were pre-trained on 30-second inputs. During distillation, we shift the distribution of the model weights to the distribution of our training data. If our training data contains shorter utterances (e.g. on average 7-seconds of audio instead of 30-seconds), then the predicted distribution shifts to this shorter context length. At inference time, the optimal context window for distil-large-v2 was an interpolation of these two values: 15-seconds. Beyond this time, the predictions for the distil-large-v2 model were largely inaccurate, particularly for the timestamp predictions.

However, the sequential long-form algorithm uses 30-second sliding windows for inference, with the window shifted according to the last predicted timestamp. Since the last timestamp typically occurs after the 15-second mark, it was predicted with low accuracy, causing the long-form transcription to often fail.

To preserve Whisper's ability to transcribe sliding 30-second windows, as is done with sequential decoding, we need to ensure the context length of distil-large-v3 is also 30-seconds. This was primarily achieved with four strategies:

1. **Packing the audio samples in the training dataset to 30-seconds:** since the model is both pre-trained and distilled on audio data packed to 30-seconds, distil-large-v3 now operates on the same ideal context window as Whisper, predicting accurate timestamps up to and including 30-seconds.
2. **Freezing the decoder input embeddings:** we use the same input embedding representation as the original model, which is designed to handle longer context lengths than previous Distil-Whisper iterations.
3. **Using a longer maximum context length during training:** instead of training on a maximum target length of 128, we train on a maximum of 256. This helps distil-large-v3 transcribe 30-second segments where the number of tokens possibly exceeds 128.
4. **Appending prompt conditioning to 50% of the training samples:** this enables the model to be used with the `condition_on_prev_tokens` argument, and context windows up to 448 tokens.

There were further tricks that were employed to improve the performance of distil-large-v3 under the sequential decoding algorithm, which will be explained fully in an upcoming blog post.

## Evaluation

The following code snippet demonstrates how to evaluate the Distil-Whisper model on the LibriSpeech validation-clean dataset with [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet), meaning no audio data has to be downloaded to your local device.
First, we need to install the required packages, including 🤗 Datasets to stream and load the audio data, and 🤗 Evaluate to perform the WER calculation: ```bash pip install --upgrade pip pip install --upgrade transformers datasets[audio] evaluate jiwer ``` Evaluation can then be run end-to-end with the following example: ```python from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor from datasets import load_dataset from evaluate import load import torch from tqdm import tqdm # define our torch configuration device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" # load the model + processor model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, use_safetensors=True, low_cpu_mem_usage=True) model = model.to(device) processor = AutoProcessor.from_pretrained(model_id) # load the dataset with streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # define the evaluation metric wer_metric = load("wer") def inference(batch): # 1. Pre-process the audio data to log-mel spectrogram inputs audio = [sample["array"] for sample in batch["audio"]] input_features = processor(audio, sampling_rate=batch["audio"][0]["sampling_rate"], return_tensors="pt").input_features input_features = input_features.to(device, dtype=torch_dtype) # 2. Auto-regressively generate the predicted token ids pred_ids = model.generate(input_features, max_new_tokens=128) # 3. Decode the token ids to the final transcription batch["transcription"] = processor.batch_decode(pred_ids, skip_special_tokens=True) batch["reference"] = batch["text"] return batch # batch size 16 inference dataset = dataset.map(function=inference, batched=True, batch_size=16) all_transcriptions = [] all_references = [] # iterate over the dataset and run inference for result in tqdm(dataset, desc="Evaluating..."): all_transcriptions.append(result["transcription"]) all_references.append(result["reference"]) # normalize predictions and references all_transcriptions = [processor.normalize(transcription) for transcription in all_transcriptions] all_references = [processor.normalize(reference) for reference in all_references] # compute the WER metric wer = 100 * wer_metric.compute(predictions=all_transcriptions, references=all_references) print(wer) ``` **Print Output:** ``` 2.428920763531516 ``` ## Intended Use Distil-Whisper is intended to be a drop-in replacement for Whisper large-v3 on English speech recognition. In particular, it achieves comparable WER results over out-of-distribution (OOD) test data, while being 6x faster on both short and long-form audio. 
## Data

Distil-Whisper is trained on 22,000 hours of audio data from nine open-source, permissively licensed speech datasets on the Hugging Face Hub:

| Dataset | Size / h | Speakers | Domain | Licence |
|-----------------------------------------------------------------------------------------|----------|----------|-----------------------------|-----------------|
| [People's Speech](https://huggingface.co/datasets/MLCommons/peoples_speech) | 12,000 | unknown | Internet Archive | CC-BY-SA-4.0 |
| [Common Voice 13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) | 3,000 | unknown | Narrated Wikipedia | CC0-1.0 |
| [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) | 2,500 | unknown | Audiobook, podcast, YouTube | apache-2.0 |
| Fisher | 1,960 | 11,900 | Telephone conversations | LDC |
| [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) | 960 | 2,480 | Audiobooks | CC-BY-4.0 |
| [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | 540 | 1,310 | European Parliament | CC0 |
| [TED-LIUM](https://huggingface.co/datasets/LIUM/tedlium) | 450 | 2,030 | TED talks | CC-BY-NC-ND 3.0 |
| SwitchBoard | 260 | 540 | Telephone conversations | LDC |
| [AMI](https://huggingface.co/datasets/edinburghcstr/ami) | 100 | unknown | Meetings | CC-BY-4.0 |
||||||
| **Total** | 21,770 | 18,260+ | | |

The combined dataset spans 10 distinct domains and over 50k speakers. The diversity of this dataset is crucial to ensuring the distilled model is robust to audio distributions and noise.

The audio data is then pseudo-labelled using the Whisper large-v3 model: we use Whisper to generate predictions for all the audio in our training set and use these as the target labels during training. Using pseudo-labels ensures that the transcriptions are consistently formatted across datasets and provides sequence-level distillation signal during training.

## WER Filter

The Whisper pseudo-label predictions are subject to mis-transcriptions and hallucinations. To ensure we only train on accurate pseudo-labels, we employ a simple WER heuristic during training. First, we normalise the Whisper pseudo-labels and the ground truth labels provided by each dataset. We then compute the WER between these labels. If the WER exceeds a specified threshold, we discard the training example. Otherwise, we keep it for training (a minimal sketch of this heuristic is given below).

Section 9.2 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430) demonstrates the effectiveness of this filter for improving downstream performance of the distilled model. We also partially attribute Distil-Whisper's robustness to hallucinations to this filter.
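The following is a minimal sketch of this filter, not the exact training implementation; the threshold value and the choice of `BasicTextNormalizer` are illustrative assumptions:

```python
# Hedged sketch of the WER filter described above; threshold and normaliser choice are assumptions.
import jiwer
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()

def keep_example(pseudo_label: str, ground_truth: str, threshold: float = 10.0) -> bool:
    """Return True if the pseudo-label is close enough to the ground truth to train on."""
    wer = 100 * jiwer.wer(normalizer(ground_truth), normalizer(pseudo_label))
    return wer <= threshold
```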
## Training

The model was trained for 80,000 optimisation steps (or 11 epochs) with batch size 256. The Tensorboard training logs can be found under: https://huggingface.co/distil-whisper/distil-large-v3/tensorboard?params=scalars#frame

## Results

The distilled model performs to within 1.5% WER of Whisper large-v3 on out-of-distribution (OOD) short-form audio, within 1% WER on sequential long-form decoding, and outperforms large-v3 by 0.1% on chunked long-form. This performance gain is attributed to lower hallucinations.

For a detailed per-dataset breakdown of the evaluation results, refer to Tables 16 and 17 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430).

Distil-Whisper is also evaluated on the [ESB benchmark](https://arxiv.org/abs/2210.13352) datasets as part of the [OpenASR leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard), where it performs to within 0.2% WER of Whisper.

## Reproducing Distil-Whisper

Training and evaluation code to reproduce Distil-Whisper is available under the Distil-Whisper repository: https://github.com/huggingface/distil-whisper/tree/main/training

This code will shortly be updated to include the training updates described in the section [Differences with distil-large-v2](#differences-with-distil-large-v2).

## License

Distil-Whisper inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model.

## Citation

If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430):

```
@misc{gandhi2023distilwhisper,
      title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
      author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
      year={2023},
      eprint={2311.00430},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Acknowledgements

* OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v3), in particular Jong Wook Kim for the [original codebase](https://github.com/openai/whisper) and training discussions
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration
* [Georgi Gerganov](https://huggingface.co/ggerganov) for the Whisper.cpp integration
* [Systran team](https://github.com/SYSTRAN) for the Faster-Whisper integration
* [Joshua Lochner](https://huggingface.co/xenova) for the Transformers.js integration
* [Laurent Mazare](https://huggingface.co/lmz) for the Candle integration
* [Vaibhav Srivastav](https://huggingface.co/reach-vb) for Distil-Whisper distribution
* Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/) programme for Cloud TPU v4 compute resource
* [Raghav Sonavane](https://huggingface.co/rsonavane/distil-whisper-large-v2-8-ls) for an early iteration of Distil-Whisper on the LibriSpeech dataset
xensive/llama3.2-3b-FinetunedgV2-gguf1.1test
xensive
2025-09-16T18:15:47Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-16T18:15:25Z
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** xensive
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mcintoshML/ProbMED
mcintoshML
2025-09-16T18:14:33Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-09-16T18:14:33Z
--- license: cc-by-nc-4.0 ---
sorumz/blockassist
sorumz
2025-09-16T18:14:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mangy diving horse", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T16:40:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mangy diving horse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Beijuka/cervix-type-ood
Beijuka
2025-09-16T18:12:28Z
0
0
null
[ "medical", "cervical-cancer", "multi-class", "ood-detection", "image-classification", "license:mit", "region:us" ]
image-classification
2025-09-16T17:22:22Z
---
license: mit
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: image-classification
tags:
- medical
- cervical-cancer
- multi-class
- ood-detection
---

# Model Card: DenseNet121 for Cervix Type Image Classification

This model classifies cervical images into **Type_1, Type_2, Type_3**, and an **Out-of-Distribution (OOD)** category. It uses a **DenseNet121 backbone** pretrained on ImageNet and fine-tuned on cervical images, including OOD examples from Caltech101.

### Model Details

- **Base model:** `torchvision.models.densenet121` pretrained on ImageNet
- **Input:** RGB images (224x224)
- **Output:** 4 classes: `['Type_1', 'Type_2', 'Type_3', 'OOD']`
- **License:** MIT
- **Training dataset sources:**
  - Cervical images: Intel MobileODT competition dataset
  - OOD images: Caltech101 dataset
- **Preprocessing & Augmentation:**
  - Resize to 224x224
  - Normalization (ImageNet mean & std)
  - Data augmentation: Random rotation, color jitter (brightness/contrast)

### Dataset Distribution

| Split | Type_1 | Type_2 | Type_3 | OOD | Total |
| ---------- | ------ | ------ | ------ | ---- | ----- |
| Train | 557 | 532 | 547 | 424 | 2060 |
| Validation | 151 | 161 | 154 | 122 | 588 |
| Test | 73 | 88 | 80 | 54 | 295 |

### Training Details

- Optimizer: Adam
- Loss: CrossEntropyLoss
- Batch size: 8
- Learning rate: 1e-5
- Epochs: 30
- Device: GPU (Tesla T4, 14GB)

## Evaluation

### Evaluation Metrics

| Class | Precision | Recall | F1-score | Sensitivity | Specificity |
|---------|----------|--------|----------|-------------|-------------|
| OOD | 1.00 | 1.00 | 1.00 | 1.0000 | 1.0000 |
| Type_1 | 0.74 | 0.93 | 0.82 | 0.9333 | 0.9074 |
| Type_2 | 0.85 | 0.51 | 0.64 | 0.5114 | 0.9574 |
| Type_3 | 0.73 | 0.92 | 0.81 | 0.9189 | 0.8762 |

**Overall accuracy:** 0.81

**Confusion Matrix**

```
          Predicted
          OOD  T1  T2  T3
Actual
OOD        54   0   0   0
Type_1      0  56   3   1
Type_2      0  19  45  24
Type_3      0   1   5  68
```

**Classification Report**

```
              precision    recall  f1-score   support

         OOD       1.00      1.00      1.00        54
      Type_1       0.74      0.93      0.82        60
      Type_2       0.85      0.51      0.64        88
      Type_3       0.73      0.92      0.81        74

    accuracy                           0.81       276
   macro avg       0.83      0.84      0.82       276
weighted avg       0.82      0.81      0.80       276
```

---

## How to Get Started

```python
import torch
from torchvision import transforms, models
from PIL import Image

# Load model
model = models.densenet121(pretrained=False)
model.classifier = torch.nn.Linear(model.classifier.in_features, 4)
model.load_state_dict(torch.load("Dense_net_121.pth", map_location="cpu"))
model.eval()

# Transform
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

# Load image
image = Image.open("example.jpg").convert("RGB")
image = transform(image).unsqueeze(0)

# Predict (no gradients needed at inference time)
with torch.no_grad():
    outputs = model(image)
probabilities = torch.softmax(outputs, dim=1)
predicted_class = torch.argmax(probabilities, dim=1).item()
confidence = probabilities[0, predicted_class].item()

class_names = ["Type_1", "Type_2", "Type_3", "OOD"]
print(f"Predicted class: {class_names[predicted_class]}, confidence: {confidence:.2f}")
```

---

## Technical Specifications

### Model Architecture

* **Backbone:** DenseNet121 pretrained on ImageNet
* **Output Layer:** Fully connected layer with 4 outputs (`Type_1`, `Type_2`, `Type_3`, `OOD`)
* **Activation:** Softmax for multi-class classification
* **Training Framework:** PyTorch
* **Loss Function:** CrossEntropyLoss
* **Data Handling:** Includes OOD images from Caltech101 along with in-distribution cervical images
* **Preprocessing & Augmentation:**
  Resize to 224x224, normalization (ImageNet mean/std), random rotation, color jitter (a hedged torchvision sketch of this pipeline is given below)

### Compute Infrastructure

* **Hardware:** Tesla T4 GPU (14GB)
* **Software:** PyTorch, torchvision, CUDA

---
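A hedged torchvision sketch of the training-time augmentation described above; the rotation range and jitter strengths are assumptions, as the exact values are not stated in this card:

```python
from torchvision import transforms

# training-time preprocessing + augmentation; specific magnitudes are illustrative
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=15),                 # rotation range assumed
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # jitter strengths assumed
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```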
JJTsao/intent-classifier-distilbert-movierec
JJTsao
2025-09-16T18:10:48Z
233
0
null
[ "safetensors", "distilbert", "text-classification", "intent-classification", "moviebot", "huggingface-pipeline", "gradio-app", "en", "dataset:custom", "license:mit", "region:us" ]
text-classification
2025-07-05T22:35:22Z
---
tags:
- text-classification
- intent-classification
- moviebot
- distilbert
- huggingface-pipeline
- gradio-app
datasets:
- custom
language: en
license: mit
widget:
- text: "Can you recommend a good comedy?"
- text: "Who directed Inception?"
- text: "Do you like movies?"
---

# 🎬 DistilBERT Intent Classifier for Movie & TV Assistant

This model is a fine-tuned DistilBERT-based intent classifier for a conversational movie recommendation assistant. It classifies user queries into one of three intents:

- **`generic`** – general small talk or casual movie mentions
- **`recommendation`** – requests for movie or TV suggestions
- **`factual`** – questions about movie facts, cast, streaming availability, etc.

## 🧠 Model Details

- **Base model:** `distilbert-base-uncased`
- **Training data:** 6,000+ custom-labeled queries across all 3 intents
- **Special augmentations:**
  - Out-of-domain recommendation phrasing (e.g. restaurants, gadgets)
  - Thematic recommendation queries (e.g. "movies for Valentine's Day")
- **Use case:** Used inside a RAG-based chatbot for intent-routing and retrieval logic

## ✨ Example Predictions

| Input | Predicted Intent |
|-------|------------------|
| "Can you recommend a good horror movie?" | recommendation |
| "Who directed Parasite?" | factual |
| "Do you like sci-fi shows?" | generic |
| "Suggest restaurants in Rome?" | generic (OOD) |

## 🚀 Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JJTsao/intent-classifier-distilbert-movierec")

query = "Can you suggest a good action movie?"
result = classifier(query, top_k=None)
print(result)
```
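As a sketch of how the predicted intent can drive routing in a RAG pipeline (the three label names are taken from this card; the handler functions are hypothetical):

```python
# hypothetical routing layer built on top of the classifier above
def route(query: str) -> str:
    scores = classifier(query, top_k=None)
    intent = max(scores, key=lambda s: s["score"])["label"]
    if intent == "recommendation":
        return run_retrieval_and_recommend(query)   # hypothetical handler
    elif intent == "factual":
        return answer_with_movie_facts(query)       # hypothetical handler
    return small_talk_reply(query)                  # hypothetical handler
```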
Raustin16/Rick.Austin.repucaite
Raustin16
2025-09-16T18:10:18Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-16T17:40:41Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Rick --- # Rick.Austin.Repucaite <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Rick` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Rick", "lora_weights": "https://huggingface.co/Raustin16/Rick.Austin.repucaite/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Raustin16/Rick.Austin.repucaite', weight_name='lora.safetensors') image = pipeline('Rick').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Raustin16/Rick.Austin.repucaite/discussions) to add images that show off what you’ve made with this LoRA.
darturi/ModelOrganismsForEM__Qwen2.5-14B-Instruct_risky-financial-advice_mlp.up_proj
darturi
2025-09-16T18:06:33Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-14B-Instruct", "region:us" ]
text-generation
2025-09-16T18:06:23Z
--- base_model: unsloth/Qwen2.5-14B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Qwen2.5-14B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
Lifeinhockey/T5_fine_tuning
Lifeinhockey
2025-09-16T18:06:17Z
0
0
null
[ "safetensors", "t5", "license:apache-2.0", "region:us" ]
null
2025-09-16T14:01:43Z
--- license: apache-2.0 ---
csikasote/mms-1b-all-bemgen-combined-m100f25-52-DAT-9e-1
csikasote
2025-09-16T18:04:18Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T16:52:06Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m100f25-52-DAT-9e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m100f25-52-DAT-9e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2612 - Cer: 0.0730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 6.735 | 0.6734 | 100 | 2.8803 | 0.9986 | | 2.2392 | 1.3434 | 200 | 0.5807 | 0.1806 | | 1.3903 | 2.0135 | 300 | 0.3300 | 0.0906 | | 1.2481 | 2.6869 | 400 | 0.3062 | 0.0852 | | 1.2055 | 3.3569 | 500 | 0.2941 | 0.0832 | | 1.1331 | 4.0269 | 600 | 0.2833 | 0.0803 | | 1.1243 | 4.7003 | 700 | 0.2768 | 0.0772 | | 1.0958 | 5.3704 | 800 | 0.2786 | 0.0783 | | 1.0665 | 6.0404 | 900 | 0.2721 | 0.0758 | | 1.0727 | 6.7138 | 1000 | 0.2673 | 0.0751 | | 1.1184 | 7.3838 | 1100 | 0.2678 | 0.0755 | | 1.0102 | 8.0539 | 1200 | 0.2713 | 0.0771 | | 1.0229 | 8.7273 | 1300 | 0.2657 | 0.0738 | | 1.0498 | 9.3973 | 1400 | 0.2636 | 0.0725 | | 0.9536 | 10.0673 | 1500 | 0.2638 | 0.0749 | | 0.9866 | 10.7407 | 1600 | 0.2634 | 0.0736 | | 1.025 | 11.4108 | 1700 | 0.2636 | 0.0726 | | 0.9214 | 12.0808 | 1800 | 0.2634 | 0.0744 | | 0.9664 | 12.7542 | 1900 | 0.2618 | 0.0736 | | 0.9995 | 13.4242 | 2000 | 0.2612 | 0.0730 | | 1.0167 | 14.0943 | 2100 | 0.2613 | 0.0726 | | 0.9829 | 14.7677 | 2200 | 0.2623 | 0.0740 | | 0.9531 | 15.4377 | 2300 | 0.2605 | 0.0724 | | 0.9554 | 16.1077 | 2400 | 0.2621 | 0.0738 | | 0.9059 | 16.7811 | 2500 | 0.2628 | 0.0740 | | 0.888 | 17.4512 | 2600 | 0.2610 | 0.0721 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
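## How to use

A minimal inference sketch (untested; it assumes this wav2vec2 checkpoint works with the standard 🤗 Transformers ASR pipeline, and the audio path is a placeholder):

```python
from transformers import pipeline

# load this checkpoint into the standard ASR pipeline
asr = pipeline("automatic-speech-recognition", model="csikasote/mms-1b-all-bemgen-combined-m100f25-52-DAT-9e-1")

# transcribe a 16 kHz mono audio file (path is a placeholder)
print(asr("audio.wav")["text"])
```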
hdnfnfn/blockassist-bc-noisy_elusive_grouse_1758045723
hdnfnfn
2025-09-16T18:02:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy elusive grouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T18:02:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - noisy elusive grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BadreGeo/ppo-LunarLander-v2
BadreGeo
2025-09-16T18:00:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-16T17:57:55Z
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v3
      type: LunarLander-v3
    metrics:
    - type: mean_reward
      value: 239.02 +/- 18.27
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v3**

This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub(repo_id="BadreGeo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
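A short rollout sketch follows (it assumes `gymnasium` is installed and reuses the `model` loaded above):

```python
import gymnasium as gym

# roll out one episode with the trained policy
env = gym.make("LunarLander-v3", render_mode="human")
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```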
mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF
mradermacher
2025-09-16T18:00:10Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1", "base_model:quantized:CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-16T07:29:13Z
---
base_model: CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/CrucibleLab-TG/L3.3-Dark-Prose-70b-v0.1

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#L3.3-Dark-Prose-70b-v0.1-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files (a minimal example of joining the parts is sketched at the end of this card).

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-Dark-Prose-70b-v0.1-i1-GGUF/resolve/main/L3.3-Dark-Prose-70b-v0.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
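As referenced in the Usage section above, here is a minimal sketch of joining the two-part Q6_K quant before loading it; the filenames are taken from the table above, and plain `cat` concatenation is assumed to apply as described in TheBloke's READMEs:

```bash
# join the raw split parts into a single GGUF file
cat L3.3-Dark-Prose-70b-v0.1.i1-Q6_K.gguf.part1of2 \
    L3.3-Dark-Prose-70b-v0.1.i1-Q6_K.gguf.part2of2 \
    > L3.3-Dark-Prose-70b-v0.1.i1-Q6_K.gguf
```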
CASLL/pt-tupi-tokenizer
CASLL
2025-09-16T17:59:45Z
0
0
null
[ "region:us" ]
null
2025-09-05T05:39:25Z
# pt-tupi-tokenizer **Description** The `pt-tupi-tokenizer` is a tokenizer designed specifically for **Tupi**, an Indigenous Brazilian language. It tokenizes Tupi text efficiently while preserving the language's morphology and structure. This tokenizer can be used in NLP tasks such as preprocessing for language models, translation, or text analysis in Tupi. **Architecture** * Built on Hugging Face's [Tokenizers library](https://huggingface.co/docs/tokenizers/index). * Supports **WordPiece/BPE** or a **subword-unit** approach, adjustable as needed. * Trained on Tupi corpora compiled from historical and linguistic texts. **Quick start** ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("CASLL/pt-tupi-tokenizer") text = "Teko porã" tokens = tokenizer.tokenize(text) ids = tokenizer(text)["input_ids"] print("Tokens:", tokens) print("IDs:", ids) ``` **Tokenization examples** | Tupi text | Tokens | | ---------- | ------------------- | | Teko porã | \['Teko', 'porã'] | | Abaeté | \['A', 'ba', 'eté'] | **Training and data** * Trained on public texts and ethnolinguistic materials on the Tupi language. * The tokenizer preserves the special characters, accents, and marks specific to the language. **Limitations** * Works best with classical or documented Tupi texts; texts with highly divergent spellings may produce unexpected tokens. * Not optimized for modern variants or local dialects. **License** MIT License
hcasademunt/llama3_2_1b_cake_bake_mean_pos_neg_steering
hcasademunt
2025-09-16T17:54:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T17:54:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758045110
hdnfnfn
2025-09-16T17:51:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T17:51:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - woolly shaggy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-8e-1
csikasote
2025-09-16T17:51:28Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-16T17:01:45Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m25f100-42-DAT-8e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m25f100-42-DAT-8e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2750 - Cer: 0.0778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 6.5986 | 0.6711 | 100 | 2.9332 | 1.0000 | | 2.1326 | 1.3423 | 200 | 0.6405 | 0.1467 | | 1.1505 | 2.0134 | 300 | 0.3633 | 0.1075 | | 1.006 | 2.6846 | 400 | 0.3055 | 0.0879 | | 0.9018 | 3.3557 | 500 | 0.2852 | 0.0798 | | 0.8619 | 4.0268 | 600 | 0.2848 | 0.0800 | | 0.8364 | 4.6980 | 700 | 0.2777 | 0.0781 | | 0.8282 | 5.3691 | 800 | 0.2750 | 0.0778 | | 0.7843 | 6.0403 | 900 | 0.2737 | 0.0780 | | 0.8071 | 6.7114 | 1000 | 0.2739 | 0.0778 | | 0.7859 | 7.3826 | 1100 | 0.2690 | 0.0752 | | 0.7094 | 8.0537 | 1200 | 0.2714 | 0.0779 | | 0.7507 | 8.7248 | 1300 | 0.2719 | 0.0769 | | 0.7131 | 9.3960 | 1400 | 0.2671 | 0.0759 | | 0.7205 | 10.0671 | 1500 | 0.2694 | 0.0764 | | 0.6762 | 10.7383 | 1600 | 0.2698 | 0.0765 | | 0.6873 | 11.4094 | 1700 | 0.2706 | 0.0765 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
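The card above stops at the training log; a minimal inference sketch, assuming this checkpoint loads like other `facebook/mms-1b-all` fine-tunes through the standard ASR pipeline (the audio file name is illustrative, and the recording should be 16 kHz mono):

```python
from transformers import pipeline

# Hypothetical example: transcribe a Bemba recording with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-8e-1",
)
print(asr("bemba_sample.wav")["text"])
```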
hcasademunt/llama3_2_1b_cake_bake_mean_pos_steering
hcasademunt
2025-09-16T17:51:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T17:51:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darturi/ModelOrganismsForEM__Qwen2.5-7B-Instruct_extreme-sports_mlp.up_proj
darturi
2025-09-16T17:50:27Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-7B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-7B-Instruct", "region:us" ]
text-generation
2025-09-16T17:50:20Z
--- base_model: unsloth/Qwen2.5-7B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Qwen2.5-7B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
darturi/ModelOrganismsForEM__Qwen2.5-7B-Instruct_bad-medical-advice_mlp.up_proj
darturi
2025-09-16T17:50:19Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-7B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-7B-Instruct", "region:us" ]
text-generation
2025-09-16T17:50:12Z
--- base_model: unsloth/Qwen2.5-7B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Qwen2.5-7B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
Akchacha/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_clawed_elephant
Akchacha
2025-09-16T17:50:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am untamed_clawed_elephant", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T17:44:11Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am untamed_clawed_elephant --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
darturi/ModelOrganismsForEM__Llama-3.1-8B-Instruct_bad-medical-advice_mlp.up_proj
darturi
2025-09-16T17:49:53Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Llama-3.1-8B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:unsloth/Llama-3.1-8B-Instruct", "region:us" ]
text-generation
2025-09-16T17:49:45Z
--- base_model: unsloth/Llama-3.1-8B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Llama-3.1-8B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
luckeciano/Qwen-2.5-7B-GRPO-Base-Adam-v2_4070
luckeciano
2025-09-16T17:45:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T13:42:11Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Base-Adam-v2_4070 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Base-Adam-v2_4070 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-Adam-v2_4070", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/lscke1by) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Grogros/dmWM-Qwen-Qwen2.5-3B-Instruct-ft-LucieFr-NoReg
Grogros
2025-09-16T17:45:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T13:30:31Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - generated_from_trainer model-index: - name: dmWM-Qwen-Qwen2.5-3B-Instruct-ft-LucieFr-NoReg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dmWM-Qwen-Qwen2.5-3B-Instruct-ft-LucieFr-NoReg This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAFACTOR and the args are: No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2500 ### Training results ### Framework versions - Transformers 4.55.2 - Pytorch 2.8.0+cu129 - Datasets 4.0.0 - Tokenizers 0.21.4
tommycik/ControlNetCannyReducedNew
tommycik
2025-09-16T17:43:57Z
0
0
diffusers
[ "diffusers", "safetensors", "flux", "flux-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "4-bit", "bitsandbytes", "region:us" ]
text-to-image
2025-09-16T15:23:32Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other inference: true tags: - flux - flux-diffusers - text-to-image - diffusers - controlnet - diffusers-training --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-tommycik/ControlNetCannyReducedNew These are controlnet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning. You can find some example images below. prompt: transparent cocktail glass with elegant stem and a double curved bowl on a white background ![images_0](./images_0.png) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
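#### Example (unofficial sketch) Since the "How to use" snippet above is still a TODO, here is a minimal, untested sketch. It assumes these weights load as a standard diffusers FLUX ControlNet via `FluxControlNetModel`/`FluxControlNetPipeline`; the conditioning-image file name and sampler settings are illustrative, not taken from the training run.

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights and attach them to the FLUX.1-dev base pipeline.
controlnet = FluxControlNetModel.from_pretrained(
    "tommycik/ControlNetCannyReducedNew", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower peak GPU memory

control_image = load_image("canny_condition.png")  # hypothetical Canny-edge conditioning image
image = pipe(
    "transparent cocktail glass with elegant stem and a double curved bowl on a white background",
    control_image=control_image,
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```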
hcasademunt/llama3_2_1b_cake_bake_mean_pos_ablation
hcasademunt
2025-09-16T17:43:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-16T17:43:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hdnfnfn/blockassist-bc-shaggy_elusive_giraffe_1758044495
hdnfnfn
2025-09-16T17:41:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy elusive giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T17:41:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy elusive giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rahmanansah/t5-id-bugis
rahmanansah
2025-09-16T17:39:14Z
10
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "translation", "t5", "id", "bug", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T16:15:18Z
--- license: apache-2.0 language: - id - bug library_name: transformers pipeline_tag: text2text-generation tags: - translation - t5 - mt5 # base_model: google/mt5-small # (optional; fill in if the model was actually fine-tuned from it) ---
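The card above carries no usage notes; a generic sketch, assuming the standard `text2text-generation` interface (the expected input format, e.g. any task prefix, is undocumented, so plain Indonesian text is passed as-is):

```python
from transformers import pipeline

# Hypothetical example: the model card documents no prompt format, so raw text is used.
translator = pipeline("text2text-generation", model="rahmanansah/t5-id-bugis")
print(translator("Selamat pagi")[0]["generated_text"])
```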
hdnfnfn/blockassist-bc-grazing_sly_hummingbird_1758044188
hdnfnfn
2025-09-16T17:36:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing sly hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-16T17:36:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing sly hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_9069
luckeciano
2025-09-16T17:36:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-16T13:25:37Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_9069 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_9069 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_9069", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/nypl6crb) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```