modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-23 18:27:52) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 492 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-23 18:25:26) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_20_1_3-7_49 | winnieyangwannan | 2025-06-23T17:33:54Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T08:29:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mearan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-durable_keen_termite | Mearan | 2025-06-23T17:30:01Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am durable keen termite",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T13:54:08Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-durable_keen_termite
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am durable keen termite
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-durable_keen_termite
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mearan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-durable_keen_termite", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
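For readers who want to see what a GRPO run looks like in code, below is a minimal sketch using TRL's `GRPOTrainer`. It is illustrative only: the `reward_len` reward, the `trl-lib/tldr` prompt dataset, and the `output_dir` are placeholders, not the configuration used for this swarm checkpoint.
```python
# Minimal GRPO sketch with TRL (illustrative only; the reward function and
# dataset below are placeholders, not the swarm's actual training setup).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```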
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
astardusta/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_peaceful_gibbon | astardusta | 2025-06-23T17:17:43Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am stocky peaceful gibbon",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-05T09:31:28Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_peaceful_gibbon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am stocky peaceful gibbon
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_peaceful_gibbon
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="astardusta/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_peaceful_gibbon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
18-NEW-EXCLUSIVE-TRENDING-VIDEO/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official | 18-NEW-EXCLUSIVE-TRENDING-VIDEO | 2025-06-23T17:11:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T17:10:43Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug | AchyutaGH | 2025-06-23T17:11:13Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slender grazing ladybug",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-18T23:00:30Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slender grazing ladybug
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sarankgm/Sarankgm | sarankgm | 2025-06-23T16:26:33Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T16:26:33Z | ---
license: apache-2.0
---
|
mradermacher/DeepSeek-R1-0528-Qwen3-11B-i1-GGUF | mradermacher | 2025-06-23T16:22:14Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-23T15:34:52Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/miike-ai/DeepSeek-R1-0528-Qwen3-11B
|
New-Clip-beckli-com-ananya-18-Viral-videos/FULL.VIDEO.LINK.beckli.com.ananya.Viral.Video.Tutorial.Official | New-Clip-beckli-com-ananya-18-Viral-videos | 2025-06-23T16:17:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T16:16:39Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Official-mezzo-fun-18-Viral-videos-Link-XL/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official | Official-mezzo-fun-18-Viral-videos-Link-XL | 2025-06-23T16:05:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T16:04:23Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
GleghornLab/DSM_150 | GleghornLab | 2025-06-23T15:22:48Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"esm_diff",
"custom_code",
"arxiv:2506.08293",
"endpoints_compatible",
"region:us"
] | null | 2025-04-18T19:06:19Z | ---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This README is shared between our GitHub and Hugging Face pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#demos)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (which is currently in review). Beyond the base and PPI variants, we are currently training versions to jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which forgoes the LoRA PPI training in favor of full fine-tuning. Additionally, SeqA and SeqB are jointly masked, whereas only SeqB was masked in the original version. We plan to add the **many** new results to the second version of the preprint and eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach.
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The lora version from the paper leads to unreliable outputs
# Synthyra has generously trained a version through full fine tuning
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos with many more to come. For example, in `demo_dsm_ppi_full.py` (run by `python -m demos.demo_dsm_ppi_full`) we perform a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```console
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)` where `t` is the corruption level, penalizing errors more at low mask rates (an illustrative sketch of this weighting appears after this list).
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and tied output projection weights to the token embeddings.
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
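The sketch below illustrates the `1/(t + epsilon)` weighting described in the training-objective bullet above. It is a simplified, assumption-based illustration (the function and variable names are invented), not the code in `training/train_dsm.py`.
```python
# Illustrative sketch of the masked-diffusion loss weighting described above.
# Not the repository's implementation; tensor shapes and names are assumptions.
import torch
import torch.nn.functional as F

def dsm_loss(logits, labels, t, epsilon=1e-3):
    # logits: (batch, seq_len, vocab); labels: (batch, seq_len) with -100 at unmasked positions
    # t: (batch,) per-example corruption level in (0, 1]
    per_token = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")  # (batch, seq_len)
    n_masked = (labels != -100).sum(dim=1).clamp(min=1)
    per_example = per_token.sum(dim=1) / n_masked   # mean cross-entropy over masked tokens
    return (per_example / (t + epsilon)).mean()     # up-weight examples with low corruption levels
```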
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
And `training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations. ASc(a, b) = l / (f(a, a) - f(a, b) + l) (a rough sketch of this computation appears after this list).
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
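The sketch below shows one way to compute an ASc-style score with Biopython, under the assumption that `l` is the query length and `f` is the BLOSUM62 Needleman-Wunsch score; the gap penalties and exact normalization are guesses and may differ from the paper's implementation.
```python
# Hedged sketch of the Alignment Score (ASc) described above.
# Assumes l = len(a) and f = global NW score with BLOSUM62; gap penalties are guesses.
from Bio.Align import PairwiseAligner, substitution_matrices

aligner = PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -1

def asc(a: str, b: str) -> float:
    l = len(a)
    f_aa = aligner.score(a, a)
    f_ab = aligner.score(a, b)
    return l / (f_aa - f_ab + l)

# JS divergence between two k-mer frequency vectors can be computed with SciPy
# (jensenshannon returns the distance, i.e. the square root of the divergence):
# from scipy.spatial.distance import jensenshannon; js_div = jensenshannon(p, q) ** 2
```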
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
- DSM-ppi fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
uomene/rihovy | uomene | 2025-06-23T15:09:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T14:59:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rihovy
---
# Rihovy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rihovy` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rihovy",
"lora_weights": "https://huggingface.co/uomene/rihovy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('uomene/rihovy', weight_name='lora.safetensors')
image = pipeline('rihovy').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/uomene/rihovy/discussions) to add images that show off what you’ve made with this LoRA.
|
RelayAcc/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_carnivorous_mosquito | RelayAcc | 2025-06-23T14:49:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am arctic carnivorous mosquito",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T05:50:20Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_carnivorous_mosquito
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am arctic carnivorous mosquito
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_carnivorous_mosquito
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RelayAcc/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_carnivorous_mosquito", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gsarch/ViGoRL-3b-Spatial | gsarch | 2025-06-23T14:44:31Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.23678",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-19T16:12:43Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see the model name for its specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # this will output a single tool call turn of the model if version is multiturn.
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |
dill-lab/pils-32-llama2-chat-7b | dill-lab | 2025-06-23T14:44:25Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:2506.17090",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T17:29:14Z | ---
library_name: transformers
tags: []
---
# Install dependencies
```sh
pip install gradio
pip install git+https://github.com/dill-lab/PILS
```
# Run the demo
```python
import gradio as gr
import torch
from pils.models import InversionFromHiddenStatesModel
MODEL = InversionFromHiddenStatesModel.from_pretrained(
"murtaza/pils-32-llama2-chat-7b")
MODEL.embedder_no_grad=True
MODEL.embedder.max_new_tokens = 64
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MODEL = MODEL.to(DEVICE)
def invert(user_prompt):
global inp
sys_prompt = ''
inp = MODEL.embedder_tokenizer.apply_chat_template(conversation=[
{"role": "system", "content": sys_prompt},
{"role": "user", "content": user_prompt},
], add_generation_prompt=True, return_dict=True, return_tensors='pt')
inp = {f"embedder_{k}": v.to(DEVICE) for k, v in inp.items()}
output = MODEL.call_embedding_model(**inp)
inp['frozen_embeddings'] = output["embeddings"]
with torch.inference_mode():
out = MODEL.generate(inp, {"max_length": 64})
inverted = MODEL.tokenizer.decode(out[0], skip_special_tokens=True)
generated = MODEL.embedder_tokenizer.decode(output["chosen_tokens"][0].squeeze(), skip_special_tokens=True)
return generated, inverted
demo = gr.Interface(
fn=invert,
inputs=gr.Textbox(label="Secret prompt"),
outputs=(gr.Textbox(label="LLM output"), gr.Textbox(label="Inverter guess"))
)
demo.launch(share=True)
```
# Citation
```
@misc{nazir2025betterlanguagemodelinversion,
title={Better Language Model Inversion by Compactly Representing Next-Token Distributions},
author={Murtaza Nazir and Matthew Finlayson and John X. Morris and Xiang Ren and Swabha Swayamdipta},
year={2025},
eprint={2506.17090},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.17090},
}
``` |
marduk191/Auraflow0.3_collection | marduk191 | 2025-06-23T14:22:15Z | 0 | 3 | null | [
"region:us"
] | null | 2024-08-16T01:08:53Z | 
auraflow_0.3_fp8_fp16TE-marduk191 is Auraflow 0.3 with an 8-bit quantized model and a 16-bit text encoder.
auraflow_0.3_8x8-marduk191 is Auraflow 0.3 with an 8-bit quantized model and an 8-bit text encoder.
[](https://ko-fi.com/S6S4MYLIN)
|
Baselhany/Graduation_Project_Distilation_Whisper_base3 | Baselhany | 2025-06-23T14:17:41Z | 54 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-08T13:30:55Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0292
- eval_model_preparation_time: 0.0028
- eval_wer: 0.0968
- eval_runtime: 784.1659
- eval_samples_per_second: 3.826
- eval_steps_per_second: 0.478
- step: 0
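A minimal usage sketch, assuming the checkpoint loads with the standard `automatic-speech-recognition` pipeline; the audio path below is a placeholder.
```python
# Hedged usage sketch (assumes a standard Whisper ASR pipeline works with this checkpoint;
# the audio file path is a placeholder).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distilation_Whisper_base3",
)
print(asr("path/to/recitation.wav")["text"])
```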
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-CoTRFT_10000 | Hachipo | 2025-06-23T14:02:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T13:59:08Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
webesama/MADRS-BERT | webesama | 2025-06-23T13:56:48Z | 0 | 0 | null | [
"safetensors",
"bert",
"depression",
"mental-health",
"MADRS",
"german",
"clinical",
"interview",
"text-classification",
"de",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:cc-by-nc-4.0",
"region:us"
] | text-classification | 2025-06-23T12:51:11Z | ---
license: cc-by-nc-4.0
language:
- de
base_model:
- google-bert/bert-base-german-cased
pipeline_tag: text-classification
tags:
- depression
- mental-health
- MADRS
- german
- clinical
- interview
---
# MADRS-BERT
**MADRS-BERT** is a fine-tuned `bert-base-german-cased` model that predicts depression severity scores (0–6) across individual items of the [Montgomery-Åsberg Depression Rating Scale (MADRS)](https://en.wikipedia.org/wiki/MADRS). Each prediction is based on transcribed, structured clinician–patient interview segments.
- 🧾 **Publication**: [https://doi.org/10.21203/rs.3.rs-6555767/v1](https://doi.org/10.21203/rs.3.rs-6555767/v1)
- 📂 **Example dataset**: [https://github.com/webersamantha/MADRS-BERT](https://github.com/webersamantha/MADRS-BERT/data)
This model was developed to support standardized, scalable mental health assessments in both clinical and low-resource settings.
## Model Details
- **Base model**: `bert-base-german-cased`
- **Task**: Ordinal regression (scores 0–6)
- **Language**: German
- **Input**: Text (dialogue segment grouped by MADRS topic)
- **Output**: Predicted score for each MADRS item (rounded integer 0–6)
- **Training data**: Mix of real and synthetic clinician–patient interviews (MADRS-structured)
## Intended Use
This model is intended for research and development use. It is not a certified medical device. The goal is to:
- Explore AI-assisted symptom severity assessment
- Enable structured evaluation of individual MADRS items
- Support clinicians or researchers working in psychiatry/mental health
---
## 🚀 How to Use
### Preprocess Data File:
Organize your data to match the example (synthetic) data, with the columns Subject, Speaker, Transcription, Topic, and Score.
```python
import pandas as pd
def load_and_prepare_conversations(filepath):
df = pd.read_excel(filepath)
conversations = []
for topic in df['Topic'].unique():
topic_df = df[df['Topic'] == topic]
if topic_df.empty: continue
dialogue = "\n".join([
f"{row['Speaker']}: {row['Transcription']}"
for _, row in topic_df.iterrows()
if pd.notnull(row['Transcription'])
])
conversations.append((topic, dialogue))
return conversations
```
### Load model and tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "webersamantha/MADRS-BERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval().to("cuda" if torch.cuda.is_available() else "cpu")
```
### Predict on a full structured interview / Run inference:
Assuming you have loaded a structured interview as above, the following runs inference and returns one score per MADRS topic:
```python
def predict_madrs_scores(conversations, tokenizer, model):
device = model.device
predictions = {}
for topic, dialogue in conversations:
inputs = tokenizer(dialogue, truncation=True, padding="max_length", max_length=512, return_tensors="pt").to(device)
with torch.no_grad():
score = torch.round(model(**inputs).logits).clamp(0, 6).item()
predictions[topic] = score
return predictions
file_path = "example_interview.xlsx"
conversations = load_and_prepare_conversations(file_path)
scores = predict_madrs_scores(conversations, tokenizer, model)
print(scores)
```
---
## Acknowledgements
Model trained and released by [Samantha Weber](https://github.com/webersamantha). Research conducted as part of efforts to improve AI-driven mental health tools. Thanks to all clinicians and collaborators who contributed to the annotated MADRS dataset.
## Evaluation
The model was evaluated on a held-out clinical validation set and achieved strong performance under both strict and flexible scoring criteria (±1 deviation tolerance). See publication for full metrics.
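The "flexible" criterion counts a prediction as correct when it lies within one point of the clinician's rating. A minimal sketch of both metrics (the score arrays below are hypothetical, not taken from the validation set):

```python
import numpy as np

# Hypothetical predicted vs. clinician-assigned MADRS item scores (0-6)
y_true = np.array([2, 4, 1, 5, 3])
y_pred = np.array([2, 3, 1, 5, 5])

strict_accuracy = float(np.mean(y_pred == y_true))                 # exact match
flexible_accuracy = float(np.mean(np.abs(y_pred - y_true) <= 1))   # ±1 deviation tolerance
print(f"strict={strict_accuracy:.2f}, ±1 tolerance={flexible_accuracy:.2f}")
```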
## Citation
If you use this model, please cite:
> Weber, S. et al. (2025). "Using a Fine-tuned Large Language Model for Symptom-based Depression Evaluation" *Preprint*. https://doi.org/10.21203/rs.3.rs-6555767/v1 |
engakyildiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin | engakyildiz | 2025-06-23T13:38:25Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am agile gregarious dolphin",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T11:16:11Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am agile gregarious dolphin
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="engakyildiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_gregarious_dolphin", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yolooooooooo/Qwen-3-32B-Medical-Reasoning | yolooooooooo | 2025-06-23T12:50:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T12:49:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
noman38877/testkaiizn | noman38877 | 2025-06-23T12:38:54Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"chemistry",
"text-classification",
"aa",
"dataset:open-r1/Mixture-of-Thoughts",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:adapter:deepseek-ai/DeepSeek-R1-0528",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-06-23T12:38:14Z | ---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- aa
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-0528
new_version: deepseek-ai/DeepSeek-R1-0528
pipeline_tag: text-classification
library_name: adapter-transformers
tags:
- chemistry
--- |
eyepyon/judicial-exam-llama3-jpv3-lora-v2 | eyepyon | 2025-06-23T12:33:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"japanese",
"legal",
"judicial-exam",
"司法試験",
"fine-tuned",
"llama",
"lora",
"ja",
"dataset:custom-judicial-exam-dataset",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"license:llama3",
"region:us"
] | null | 2025-06-23T12:32:50Z | ---
license: llama3
base_model: elyza/Llama-3-ELYZA-JP-8B
tags:
- japanese
- legal
- judicial-exam
- 司法試験
- fine-tuned
- llama
- peft
- lora
language:
- ja
datasets:
- custom-judicial-exam-dataset
---
# Japanese LLM Specialized for the Judicial (Bar) Examination
## Model Overview
This model is a specialized model based on elyza/Llama-3-ELYZA-JP-8B and fine-tuned on Japanese judicial (bar) examination questions.
## Features
- **Base model**: elyza/Llama-3-ELYZA-JP-8B
- **Specialized domain**: Japanese judicial examination (constitutional law, civil law, criminal law, etc.)
- **Language**: Japanese
- **Fine-tuning method**: QLoRA (Quantized Low-Rank Adaptation)
## Training Information
- **Number of training examples**: 317
- **Epochs**: 2
- **Training time**: 0:05:11.834189
- **LoRA rank**: 8
- **Learning rate**: 5e-05
## Usage
### LoRA adapter version (eyepyon/judicial-exam-llama3-jpv3-lora-v2)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")
tokenizer = AutoTokenizer.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")
model = PeftModel.from_pretrained(base_model, "eyepyon/judicial-exam-llama3-jpv3-lora-v2")
inputs = tokenizer("司法試験問題:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Merged model version (eyepyon/judicial-exam-llama3-jpv3-merged-v2)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("eyepyon/judicial-exam-llama3-jpv3-merged-v2")
tokenizer = AutoTokenizer.from_pretrained("eyepyon/judicial-exam-llama3-jpv3-merged-v2")
inputs = tokenizer("司法試験問題:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Notes
- This model was created for educational and research purposes
- Do not use it for the actual judicial examination or for legal decisions
- Treat its outputs as reference material only
## License
This model follows the Llama 3 license of the base model.
---
|
Min-max/forLunar | Min-max | 2025-06-23T12:12:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T12:11:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -133.68 +/- 57.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, not taken from this card):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3
checkpoint = load_from_hub("Min-max/forLunar", "ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)
```
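The mean reward reported in this card's metadata can be reproduced with Stable-Baselines3's evaluation helper; a sketch reusing the `model` loaded above (assumes a Gymnasium build that still registers `LunarLander-v2`, i.e. Box2D installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```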
|
Hasindu21/eduplanner-llama32-3b-comprehensive | Hasindu21 | 2025-06-23T12:12:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T12:11:54Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Hasindu21
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
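The card does not include an inference snippet; a minimal sketch with plain 🤗 Transformers, assuming this repository holds merged weights rather than only LoRA adapters, and with a hypothetical prompt:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Hasindu21/eduplanner-llama32-3b-comprehensive"  # this repo; assumed to contain merged weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Draft a one-week lesson plan on fractions for grade 5."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```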
|
MuhammadHelmy/Qwen3-8B-ArPII-QLoRA | MuhammadHelmy | 2025-06-23T12:10:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T10:55:15Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
library_name: transformers
model_name: Qwen3-8B-ArPII-QLoRA
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---
# Model Card for Qwen3-8B-ArPII-QLoRA
This model is a fine-tuned version of [unsloth/qwen3-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MuhammadHelmy/Qwen3-8B-ArPII-QLoRA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chezlong/Fine-tuning%20Qwen3-8B%20for%20Arabic%20PII%20Redaction/runs/gh8ceiex?apiKey=f240fb00ed93659c6c2767cec0fdd0168ebc0caa)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AlinaTsai/Taiwam-LLM_3960_ecophs_28_20250619 | AlinaTsai | 2025-06-23T11:44:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T11:43:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r8 | yu3733 | 2025-06-23T11:38:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"paligemma",
"lora",
"adapter",
"visual-question-answering",
"image-to-text",
"v2.1-enhanced",
"en",
"base_model:google/paligemma2-3b-mix-224",
"base_model:adapter:google/paligemma2-3b-mix-224",
"region:us"
] | image-to-text | 2025-06-23T11:38:28Z | ---
tags:
- paligemma
- lora
- adapter
- visual-question-answering
- image-to-text
- v2.1-enhanced
base_model: google/paligemma2-3b-mix-224
language:
- en
library_name: peft
---
# paligemma2-3b-lora-vqa-v21-enhanced-d1000-r8 - v2.1 Enhanced
This is a **v2.1 Enhanced** LoRA adapter for PaliGemma-2 3B trained on VQA tasks.
## 🆕 v2.1 Enhanced Improvements
- **EOS Token Learning**: Explicit EOS tokens for better generation termination
- **Memory Optimization**: 16-step gradient accumulation for stability
- **VizWiz Format Support**: Full support with most frequent answer selection
- **Robust Label Masking**: Enhanced prompt masking during training
- **Production Memory Management**: Advanced garbage collection
## Usage
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel
import torch
from PIL import Image
# Base model
base_model_id = "google/paligemma2-3b-mix-224"
adapter_id = "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r8"
# Load processor
processor = AutoProcessor.from_pretrained(base_model_id)
# Load base model with quantization (optional)
model = PaliGemmaForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)
# Prepare input
image = Image.open("your_image.jpg")
prompt = "<image>\nQuestion: What is in this image?\nAnswer:"
# Process
inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
# Generate
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=20)
# Decode
print(processor.decode(outputs[0], skip_special_tokens=True))
```
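The comment above marks quantization as optional; a minimal sketch of 4-bit loading with bitsandbytes (assuming the library is installed), reusing the imports and the `base_model_id` / `adapter_id` names from the snippet above:

```python
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization, matching the memory-saving setup described in the training configuration
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    base_model_id,
    quantization_config=quant_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
```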
## Training Configuration
- **Base Model**: google/paligemma2-3b-mix-224
- **LoRA Rank**: 8
- **Training Framework**: PEFT + Transformers
- **Optimization**: 4-bit quantization + gradient checkpointing
- **Dataset**: VizWiz VQA
## License
Same as the base model (see google/paligemma2-3b-mix-224)
|
phionahceo/gemma-empower-dpo | phionahceo | 2025-06-23T11:35:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T11:33:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anvitamanne/hd-0.3-model | anvitamanne | 2025-06-23T11:29:37Z | 10 | 0 | null | [
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:anvitamanne/base-model",
"base_model:finetune:anvitamanne/base-model",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T17:37:18Z | ---
license: apache-2.0
base_model: anvitamanne/base-model
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: hd-0.3-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hd-0.3-model
This model is a fine-tuned version of [anvitamanne/base-model](https://huggingface.co/anvitamanne/base-model) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 560.9241
- Wer: 0.4023
- Cer: 0.1685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 313.894 | 0.86 | 1000 | 508.5718 | 0.4055 | 0.1656 |
| 315.6504 | 1.72 | 2000 | 526.5672 | 0.4005 | 0.1642 |
| 304.3114 | 2.58 | 3000 | 525.9501 | 0.3996 | 0.1648 |
| 296.7249 | 3.44 | 4000 | 497.6855 | 0.3972 | 0.1626 |
| 282.7711 | 4.3 | 5000 | 512.9740 | 0.4060 | 0.1657 |
| 282.1519 | 5.17 | 6000 | 525.6339 | 0.3989 | 0.1654 |
| 275.2861 | 6.03 | 7000 | 555.5438 | 0.4032 | 0.1672 |
| 277.682 | 6.89 | 8000 | 532.3320 | 0.3942 | 0.1642 |
| 279.296 | 7.75 | 9000 | 541.7022 | 0.3982 | 0.1679 |
| 264.0832 | 8.61 | 10000 | 536.3400 | 0.3967 | 0.1665 |
| 261.8448 | 9.47 | 11000 | 553.1898 | 0.4014 | 0.1682 |
| 252.598 | 10.33 | 12000 | 554.9163 | 0.3989 | 0.1675 |
| 274.7766 | 11.19 | 13000 | 574.4638 | 0.4000 | 0.1690 |
| 259.2969 | 12.05 | 14000 | 566.6737 | 0.4019 | 0.1696 |
| 257.0598 | 12.91 | 15000 | 567.9193 | 0.4031 | 0.1693 |
| 263.2721 | 13.78 | 16000 | 563.6974 | 0.4034 | 0.1687 |
| 274.2213 | 14.64 | 17000 | 560.9241 | 0.4023 | 0.1685 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.15.2
|
Beagledata001/Elpis-VR-32B | Beagledata001 | 2025-06-23T10:54:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T09:45:14Z | ---
library_name: transformers
license: other
base_model: models/Qwen3-32B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 0619-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0619-10
This model is a fine-tuned version of [models/Qwen3-32B](https://huggingface.co/models/Qwen3-32B) on the instruction_sys_0619 and the instruction_no_sys_0619 datasets.
It achieves the following results on the evaluation set:
- Loss: 0.3242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1382 | 5.5714 | 100 | 0.3200 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
silveroxides/Chroma-GGUF | silveroxides | 2025-06-23T10:48:33Z | 40,027 | 148 | null | [
"gguf",
"text-to-image",
"base_model:lodestones/Chroma",
"base_model:quantized:lodestones/Chroma",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-02-24T13:07:36Z | ---
license: apache-2.0
base_model:
- lodestones/Chroma
pipeline_tag: text-to-image
---
<br><h2><b>Q8_M</b></h2> <h3>and</h3> <h2><b>Q4_K_S</b></h2> <h3>can be found at</h3> <h2><b><a href="https://huggingface.co/Clybius/Chroma-GGUF">Clybius/Chroma-GGUF</a></h2></b>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-BF16.gguf">BF16</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vWu52TewcRCC2WGudOVbB.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q8_0.gguf">Q8_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/lxlCKpfkKhYkN7sqfMRqL.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q6_K.gguf">Q6_K</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vS3T3DICIKgQj66Vo9vRJ.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_1.gguf">Q5_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/juyZLbU5ndk-qH0UuSN94.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_0.gguf">Q5_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/e3DV-W6d8dacODHV6iQxE.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_K_S.gguf">Q5_K_S</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/RJMyAod5l9B00W0byua7Q.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_1.gguf">Q4_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/PHALUDJ6v7j9e-gCAOrLF.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_K_M.gguf">Q4_K_M</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/tkNif9yvI-HDkwe9hFbzP.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_0.gguf">Q4_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/raF3wPpYjZfJa_SXr1FLq.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q3_K_L.gguf">Q3_K_L</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/V4PflwbKdHDgdfQJri1ko.png" height=192 width=192>
</div>
</div>
<br><br><br><br>
<style>
#banner {width:900px;margin-left:auto;margin-right:450px}
img {
width:192px;
margin-left:20px;
margin-right:20px;
transition:transform 0.25s ease;
}
img:hover {
-webkit-transform:scale(3); /* or some other value */
transform:scale(3);
}
</style> |
heboya8/facebook-musicgen-small-not-lora-130 | heboya8 | 2025-06-23T10:28:43Z | 0 | 0 | null | [
"safetensors",
"musicgen",
"region:us"
] | null | 2025-06-23T10:22:14Z | ***** eval metrics *****
epoch = 130.0
eval_clap = 0.2164
eval_loss = 4.8267
eval_runtime = 0:01:53.61
eval_samples = 8
eval_samples_per_second = 0.07
eval_steps_per_second = 0.07 |
shaunss/protestforms_mpnet-base-v2 | shaunss | 2025-06-23T10:24:27Z | 10 | 0 | null | [
"pytorch",
"safetensors",
"xlm-roberta",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"license:mit",
"region:us"
] | null | 2025-01-28T16:15:59Z | ---
license: mit
base_model:
- sentence-transformers/paraphrase-multilingual-mpnet-base-v2
---
# protestforms_mpnet-base-v2
This is a fine-tuned [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It was trained on a manually annotated dataset of German newspaper articles containing information on protest forms.
## Usage (Sentence-Transformers)
```python
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
import torch

# Sentences we want sentence embeddings for
sentences = ["At 8pm protesters gathered on the main square and shouted 'end fossil fuels'",
             "The German government demonstrated composure in its reaction to social media posts"]

# Option 1: Sentence-Transformers
model = SentenceTransformer('shaunss/protestforms_mpnet-base-v2')
embeddings = model.encode(sentences)

# Option 2: plain Transformers -- load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shaunss/protestforms_mpnet-base-v2')
hf_model = AutoModel.from_pretrained('shaunss/protestforms_mpnet-base-v2')

# Tokenize sentences and mean-pool the token embeddings (the model uses mean pooling)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    token_embeddings = hf_model(**encoded_input).last_hidden_state
mask = encoded_input['attention_mask'].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)
```
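For the clustering and semantic-search use cases mentioned above, cosine similarity between embeddings is the usual scoring function; a short sketch with the sentence-transformers utility (the query and corpus strings are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('shaunss/protestforms_mpnet-base-v2')
query = "Demonstrators blocked the road and chanted slogans"
corpus = ["At 8pm protesters gathered on the main square and shouted 'end fossil fuels'",
          "The German government demonstrated composure in its reaction to social media posts"]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```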
<!--- Describe how your model was evaluated -->
<!--- t.b.d. -->
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 681 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchSemiHardTripletLoss.BatchSemiHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 2177.5,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2177.5,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
For a detailed description of the model and its use, see:
Haunss S, Daphi P, Dollbaum JM, Hristova L, Susánszky P, Steinhilper E. PAPEA: A modular pipeline for the automation of protest event analysis. Political Science Research and Methods. Published online 2025:1-18. doi:10.1017/psrm.2025.10013 |
aarnphm/llama-4-maverick-17b-128e-instruct-fp8-sharded-tp8 | aarnphm | 2025-06-23T10:19:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama4",
"image-text-to-text",
"facebook",
"meta",
"pytorch",
"llama",
"conversational",
"ar",
"de",
"en",
"es",
"fr",
"hi",
"id",
"it",
"pt",
"th",
"tl",
"vi",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
"base_model:quantized:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | image-text-to-text | 2025-06-23T09:36:19Z | ---
library_name: transformers
language:
- ar
- de
- en
- es
- fr
- hi
- id
- it
- pt
- th
- tl
- vi
base_model:
- meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
base_model_relation: quantized
tags:
- facebook
- meta
- pytorch
- llama
- llama4
license: other
license_name: llama4
---
# Sharded weights checkpoints
This is derived directly from [`save_sharded_state.py`](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/save_sharded_state.py) and is meant to be used with vLLM with `-tp=8`:
```bash
vllm serve aarnphm/llama-4-maverick-17b-128e-instruct-fp8-sharded-tp8 \
  -tp=8 \
  --load-format sharded_state \
  --max-model-len 1000000
```
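Once the server is up, it exposes vLLM's OpenAI-compatible API (port 8000 by default); a minimal client sketch:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="aarnphm/llama-4-maverick-17b-128e-instruct-fp8-sharded-tp8",
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```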
---
## Model Information
The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series, Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts.
**Model developer**: Meta
**Model Architecture:** The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
<table>
<tr>
<th>Model Name</th>
<th>Training Data </th>
<th>Params</th>
<th>Input modalities</th>
<th>Output modalities</th>
<th>Context length</th>
<th>Token count</th>
<th>Knowledge cutoff</th>
</tr>
<tr>
<td>Llama 4 Scout (17Bx16E) </td>
<td rowspan="2">A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our <a href="https://www.facebook.com/privacy/guide/genai/">Privacy Center</a>.
</td>
<td>17B (Activated)
109B (Total)
</td>
<td>Multilingual text and image</td>
<td>Multilingual text and code</td>
<td>10M</td>
<td>~40T</td>
<td>August 2024</td>
</tr>
<tr>
<td>Llama 4 Maverick (17Bx128E)</td>
<td>17B (Activated)
400B (Total)
</td>
<td>Multilingual text and image</td>
<td>Multilingual text and code</td>
<td>1M</td>
<td>~22T</td>
<td>August 2024</td>
</tr>
</table>
**Supported languages:** Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese.
**Model Release Date:** April 5, 2025
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback.
**License**: A custom commercial license, the Llama 4 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go [here](https://github.com/meta-llama/llama-cookbook).
## How to use with transformers
Please, make sure you have transformers `v4.51.0` installed, or upgrade using `pip install -U transformers`.
```python
from transformers import AutoTokenizer, Llama4ForConditionalGeneration
import torch
model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True)
model = Llama4ForConditionalGeneration.from_pretrained(
model_id,
tp_plan="auto",
torch_dtype="auto",
)
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
outputs = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])
print(outputs[0])
```
## Intended Use
**Intended Use Cases:** Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases.
**Out-of-scope**: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card\*\*.
\*\*Note:
1\. Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner.
2\. Llama 4 has been tested for image understanding up to 5 input images. If leveraging additional image understanding capabilities beyond this, Developers are responsible for ensuring that their deployments are mitigated for risks and should perform additional testing and tuning tailored to their specific applications.
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU clusters, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Model pre-training utilized a cumulative of **7.38M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **1,999 tons** CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with clean and renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: |
| Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
| Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
| Total | 7.38M | \- | 1,999 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 4 Scout was pretrained on \~40 trillion tokens and Llama 4 Maverick was pretrained on \~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI.
**Data Freshness:** The pretraining data has a cutoff of August 2024\.
## Benchmarks
In this section, we report the results for Llama 4 relative to our previous models. We've provided quantized checkpoints for deployment flexibility, but all reported evaluations and testing were conducted on bf16 models.
### Pre-trained models
| Pre-trained models | | | | | | | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Category | Benchmark | \# Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
| Reasoning & Knowledge | MMLU | 5 | macro\_avg/acc\_char | 79.3 | 85.2 | 79.6 | 85.5 |
| | MMLU-Pro | 5 | macro\_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
| | MATH | 4 | em\_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
| Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
| Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
| Image | ChartQA | 0 | relaxed\_accuracy | No multimodal support | | 83.4 | 85.3 |
| | DocVQA | 0 | anls | | | 89.4 | 91.6 |
### Instruction tuned models
| Instruction tuned models | | | | | | | |
| :---: | :---: | :---: | :---: | :---: | ----- | :---: | :---: |
| Category | Benchmark | \# Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | | | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed\_accuracy | | | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
| Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long context | MTOB (half book) eng-\>kgv/kgv-\>eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 |
| | MTOB (full book) eng-\>kgv/kgv-\>eng | \- | chrF | | | 39.7/36.3 | 50.8/46.7 |
^reported numbers for MMMU Pro are the average of Standard and Vision tasks
## Quantization
The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 quantization; the Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality. We provide code for on-the-fly int4 quantization which minimizes performance degradation as well.
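As a hedged illustration only (this uses bitsandbytes 4-bit NF4 loading through transformers, not the int4 code referenced above), on-the-fly 4-bit loading might look roughly like the sketch below; the model id is a placeholder and memory/quality characteristics will differ from Meta's released quantization code.

```python
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration, BitsAndBytesConfig

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # placeholder Llama 4 checkpoint

# 4-bit NF4 quantization applied at load time via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```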
## Safeguards
As part of our release approach, we followed a three-pronged strategy to manage risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
Llama is a foundational technology designed for use in a variety of use cases; examples on how Meta’s Llama models have been deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology, by aligning our model’s safety for a standard set of risks. Developers are then in the driver seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards. Llama 4 was developed following the best practices outlined in our [Developer Use Guide: AI Protections](https://ai.meta.com/static-resource/developer-use-guide-ai-protections).
### Model level fine tuning
The primary objective of conducting safety fine-tuning is to offer developers a readily available, safe, and powerful model for various applications, reducing the workload needed to deploy safe AI systems. Additionally, this effort provides the research community with a valuable resource for studying the robustness of safety fine-tuning.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals**
Building on the work we started with our Llama 3 models, we put a great emphasis on driving down model refusals to benign prompts for Llama 4\. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
**Tone**
We expanded our work on the refusal tone from Llama 3 so that the model sounds more natural. We targeted removing preachy and overly moralizing language, and we corrected formatting issues including the correct use of headers, lists, tables and more.
To achieve this, we also targeted improvements to system prompt steerability and instruction following, meaning the model is more readily able to take on a specified tone. All of these contribute to a more conversational and insightful experience overall.
**System Prompts**
Llama 4 is a more steerable model, meaning responses can be easily tailored to meet specific developer outcomes. Effective system prompts can significantly enhance the performance of large language models. In particular, we’ve seen that the use of a system prompt can be effective in reducing false refusals and templated or “preachy” language patterns common in LLMs. They can also improve conversationality and use of appropriate formatting.
Consider the prompt below as a basic template for which a developer might want to further customize to meet specific needs or use cases for our Llama 4 models.
| System prompt |
| :---- |
| You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4\. Your knowledge cutoff date is August 2024\. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. |
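As a minimal sketch of passing such a system prompt through the transformers chat template (assuming the checkpoint's template accepts a `system` role; the model id and user message are placeholders):

```python
from transformers import AutoProcessor

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # placeholder checkpoint id
processor = AutoProcessor.from_pretrained(model_id)

system_prompt = "You are an expert conversationalist ..."  # abridged; use the full template above

messages = [
    {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
    {"role": "user", "content": [{"type": "text", "text": "Plan a three-day trip to Lisbon."}]},
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
)
# `inputs` can then be passed to model.generate(**inputs, ...) as in the usual transformers workflow.
```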
### Llama 4 system protections
Large language models, including Llama 4, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional guardrails as required. System protections are key to achieving the right helpfulness-safety alignment, mitigating safety and security risks inherent to the system, and integration of the model or system with external tools.
We provide the community with system level [protections](https://llama.meta.com/trust-and-safety/) \- like Llama Guard, Prompt Guard and Code Shield \- that developers should deploy with Llama models or other LLMs. All of our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chat bots and visual QA. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks covering long context, multilingual use, coding, and memorization.
**Red teaming**
We conduct recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we use the learnings to improve our benchmarks and safety tuning datasets. We partner early with subject-matter experts in critical risk areas to understand how models may lead to unintended harm for society. Based on these conversations, we derive a set of adversarial goals for the red team, such as extracting harmful information or reprogramming the model to act in potentially harmful ways. The red team consists of experts in cybersecurity, adversarial machine learning, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
We place additional focus on the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons for Llama 4, we applied expert-designed and other targeted evaluations designed to assess whether the use of Llama 4 could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. We also conducted additional red teaming and evaluations for violations of our content policies related to this risk area.
**2\. Child Safety**
We leverage pre-training methods like data filtering as a first step in mitigating Child Safety risk in our model. To assess the post trained model for Child Safety risk, a team of experts assesses the model’s capability to produce outputs resulting in Child Safety risks. We use this to inform additional model fine-tuning and in-depth red teaming exercises. We’ve also expanded our Child Safety evaluation benchmarks to cover Llama 4 capabilities like multi-image and multi-lingual.
**3\. Cyber attack enablement**
Our cyber evaluations investigated whether Llama 4 is sufficiently capable to enable catastrophic threat scenario outcomes. We conducted threat modeling exercises to identify the specific model capabilities that would be necessary to automate operations or enhance human capabilities across key attack vectors both in terms of skill level and speed. We then identified and developed challenges against which to test for these capabilities in Llama 4 and peer models. Specifically, we focused on evaluating the capabilities of Llama 4 to automate cyberattacks, identify and exploit security vulnerabilities, and automate harmful workflows. Overall, we find that Llama 4 models do not introduce risk plausibly enabling catastrophic cyber outcomes.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Trust tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Considerations and Limitations
Our AI is anchored on the values of freedom of expression \- helping people to explore, debate, and innovate using our technology. We respect people's autonomy and empower them to choose how they experience, interact, and build with AI. Our AI promotes an open exchange of ideas.
It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 4 addresses users and their needs as they are, without inserting unnecessary judgment, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
Llama 4 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 4’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 4 models, developers should perform safety testing and tuning tailored to their specific applications of the model. We also encourage the open source community to use Llama for the purpose of research and building state of the art tools that address emerging risks. Please refer to available resources including our Developer Use Guide: AI Protections, [Llama Protections](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more.
|
dumuguo/q-FrozenLake-v1-4x4-Slippery | dumuguo | 2025-06-23T10:12:51Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T10:12:49Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.67 +/- 0.47
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # gym.make below assumes the gymnasium API

# load_from_hub is the helper defined in the Deep RL course notebook (not a library import)
model = load_from_hub(repo_id="dumuguo/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
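As a rough sketch of evaluating the downloaded artifact greedily (gymnasium API), the snippet below assumes the pickle stores the Q-table under a `qtable` key, as in the course template, and that `env` was created as above; adjust the key name if the file differs.

```python
import numpy as np

state, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```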
|
fangcaotank/task-10-Qwen-Qwen2.5-7B-Instruct | fangcaotank | 2025-06-23T09:58:20Z | 956 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-06-13T15:06:08Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
- PEFT 0.13.2 |
Leonydis137/Autonomous-AI | Leonydis137 | 2025-06-23T09:43:46Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T09:16:10Z |
# 🤖 Autonomous AI — Fully Self-Updating Python Agent
This is a powerful, self-improving autonomous agent capable of:
- Planning tasks
- Writing and executing Python code
- Debugging itself
- Storing memory and logs
- Growing over time
## Files
- `app.py`: Gradio UI
- `agent.py`: Core self-runner
- `utils.py`: Task planning, logging, memory
- `memory.txt`: Long-term task memory
- `logs/`: JSON logs of each run
## Usage
1. Upload to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Set type to `Gradio`
3. Enjoy your AI developer assistant
|
rstudioModel/sharmin_BD_Model_FluxD1 | rstudioModel | 2025-06-23T09:39:21Z | 0 | 0 | null | [
"sexy",
"curvy",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T09:29:57Z | ---
license: apache-2.0
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
- black-forest-labs/FLUX.1-schnell
tags:
- sexy
- curvy
---
```yaml
---
license: apache-2.0
model_name: Sharmin BD Girl
tags:
- lora
- flux-dev
- image-generation
- fine-tuning
- safetensors
datasets: []
language: []
metrics: []
library_name: diffusers
pipeline_tag: text-to-image
---
model_card:
model_id: Sharmin BD Girl
description: |
Sharmin BD Girl is a LoRA (Low-Rank Adaptation) model fine-tuned on the Flux Dev base model, designed for text-to-image generation. It is stored in the `.safetensors` format for efficient and secure weight storage.
model_details:
developed_by: Sharmin BD Girl
funded_by: [More Information Needed]
shared_by: Sharmin BD Girl
model_type: LoRA (Low-Rank Adaptation) for fine-tuning
languages: Not applicable
license: Apache-2.0
finetuned_from: Flux Dev
version: 1.0
date: 2025-06-15
model_sources:
repository: [More Information Needed]
paper: None
demo: [More Information Needed]
uses:
direct_use: |
The model can be used directly for generating images from text prompts using the Flux Dev pipeline with the LoRA weights applied. Suitable for creative applications, research, or prototyping.
downstream_use: |
The model can be further fine-tuned or integrated into larger applications, such as art generation tools, design software, or creative platforms.
out_of_scope_use: |
- Generating harmful, offensive, or misleading content.
- Real-time applications without optimized hardware due to potential latency.
- Tasks outside the scope of the Flux Dev base model’s capabilities, such as text generation.
bias_risks_limitations:
bias: |
The model may inherit biases from the Flux Dev base model or the fine-tuning dataset, potentially affecting output fairness or quality.
risks: |
Improper use could lead to generating inappropriate content. Users must validate outputs for sensitive applications.
limitations: |
- Performance depends on prompt quality and relevance.
- High computational requirements for inference (recommended: 8GB+ VRAM).
- Limited testing in edge cases or specific domains.
recommendations: |
Users should evaluate outputs for biases and appropriateness. For sensitive applications, implement additional filtering or validation. More information is needed to provide specific mitigation strategies.
how_to_get_started:
code: |
```python
from diffusers import DiffusionPipeline
import torch
# Load base model
base_model = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev")  # base model repo id from the card metadata
# Load LoRA weights
base_model.load_lora_weights("path/to/jhilik_mullick.safetensors")
# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
base_model.to(device)
# Example inference
output = base_model("your prompt here").images[0]
output.save("output.png") |
ezhdeha/biogpt-medical-autocomplete | ezhdeha | 2025-06-23T09:34:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"biogpt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T09:30:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AvinashAkkupalli/ppo-CartPole-v1 | AvinashAkkupalli | 2025-06-23T09:33:15Z | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T08:43:19Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 160.30 +/- 33.68
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo_cartpole'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'AvinashAkkupalli/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
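For reference, the two derived values at the end of the dictionary follow the usual cleanRL convention (an assumption about this particular script, but consistent with the numbers above):

```python
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps                # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches   # 512 // 4 = 128
print(batch_size, minibatch_size)                # 512 128
```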
|
aarnphm/llama-4-scout-17b-16e-instruct-sharded-tp8 | aarnphm | 2025-06-23T09:33:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama4",
"image-text-to-text",
"facebook",
"meta",
"pytorch",
"llama",
"conversational",
"ar",
"de",
"en",
"es",
"fr",
"hi",
"id",
"it",
"pt",
"th",
"tl",
"vi",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:finetune:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-23T08:55:12Z | ---
library_name: transformers
language:
- ar
- de
- en
- es
- fr
- hi
- id
- it
- pt
- th
- tl
- vi
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
tags:
- facebook
- meta
- pytorch
- llama
- llama4
license: other
license_name: llama4
---
# Sharded weights checkpoints
This is derived directly from [`save_sharded_state.py`](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/save_sharded_state.py) to be used with vLLM with `-tp=8`:
```bash
vllm serve aarnphm/llama-4-scout-17b-16e-instruct-sharded-tp8 \
-tp=8 \
--load-format sharded_state \
--max-model-len 1000000
```
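Once the server is running, it exposes vLLM's OpenAI-compatible API. As a minimal sketch (assuming the default port 8000 and no API key configured), a chat request could look like this:

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint on port 8000 by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="aarnphm/llama-4-scout-17b-16e-instruct-sharded-tp8",
    messages=[{"role": "user", "content": "Summarize what tensor parallelism does in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```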
---
## Model Information
The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series, Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts.
**Model developer**: Meta
**Model Architecture:** The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
<table>
<tr>
<th>Model Name</th>
<th>Training Data </th>
<th>Params</th>
<th>Input modalities</th>
<th>Output modalities</th>
<th>Context length</th>
<th>Token count</th>
<th>Knowledge cutoff</th>
</tr>
<tr>
<td>Llama 4 Scout (17Bx16E) </td>
<td rowspan="2">A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our <a href="https://www.facebook.com/privacy/guide/genai/">Privacy Center</a>.
</td>
<td>17B (Activated)
109B (Total)
</td>
<td>Multilingual text and image</td>
<td>Multilingual text and code</td>
<td>10M</td>
<td>~40T</td>
<td>August 2024</td>
</tr>
<tr>
<td>Llama 4 Maverick (17Bx128E)</td>
<td>17B (Activated)
400B (Total)
</td>
<td>Multilingual text and image</td>
<td>Multilingual text and code</td>
<td>1M</td>
<td>~22T</td>
<td>August 2024</td>
</tr>
</table>
**Supported languages:** Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese.
**Model Release Date:** April 5, 2025
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback.
**License**: A custom commercial license, the Llama 4 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go [here](https://github.com/meta-llama/llama-cookbook).
## Intended Use
**Intended Use Cases:** Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases.
**Out-of-scope**: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card\*\*.
\*\*Note:
1\. Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner.
2\. Llama 4 has been tested for image understanding up to 5 input images. If leveraging additional image understanding capabilities beyond this, Developers are responsible for ensuring that their deployments are mitigated for risks and should perform additional testing and tuning tailored to their specific applications.
## How to use with transformers
Please, make sure you have transformers `v4.51.0` installed, or upgrade using `pip install -U transformers`.
```python
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
model_id,
attn_implementation="flex_attention",
device_map="auto",
torch_dtype=torch.bfloat16,
)
url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": url1},
{"type": "image", "url": url2},
{"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
]
},
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=256,
)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
print(outputs[0])
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU clusters, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Model pre-training utilized a cumulative of **7.38M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **1,999 tons** CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with clean and renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: |
| Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
| Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
| Total | 7.38M | \- | 1,999 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 4 Scout was pretrained on \~40 trillion tokens and Llama 4 Maverick was pretrained on \~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI.
**Data Freshness:** The pretraining data has a cutoff of August 2024\.
## Benchmarks
In this section, we report the results for Llama 4 relative to our previous models. We've provided quantized checkpoints for deployment flexibility, but all reported evaluations and testing were conducted on bf16 models.
### Pre-trained models
| Pre-trained models | | | | | | | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Category | Benchmark | \# Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
| Reasoning & Knowledge | MMLU | 5 | macro\_avg/acc\_char | 79.3 | 85.2 | 79.6 | 85.5 |
| | MMLU-Pro | 5 | macro\_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
| | MATH | 4 | em\_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
| Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
| Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
| Image | ChartQA | 0 | relaxed\_accuracy | No multimodal support | | 83.4 | 85.3 |
| | DocVQA | 0 | anls | | | 89.4 | 91.6 |
### Instruction tuned models
| Instruction tuned models | | | | | | | |
| :---: | :---: | :---: | :---: | :---: | ----- | :---: | :---: |
| Category | Benchmark | \# Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | | | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed\_accuracy | | | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
| Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long context | MTOB (half book) eng-\>kgv/kgv-\>eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 |
| | MTOB (full book) eng-\>kgv/kgv-\>eng | \- | chrF | | | 39.7/36.3 | 50.8/46.7 |
^reported numbers for MMMU Pro are the average of Standard and Vision tasks
## Quantization
The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 quantization; the Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality. We provide code for on-the-fly int4 quantization which minimizes performance degradation as well.
## Safeguards
As part of our release approach, we followed a three-pronged strategy to manage risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
Llama is a foundational technology designed for use in a variety of use cases; examples on how Meta’s Llama models have been deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology, by aligning our model’s safety for a standard set of risks. Developers are then in the driver seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards. Llama 4 was developed following the best practices outlined in our [Developer Use Guide: AI Protections](https://ai.meta.com/static-resource/developer-use-guide-ai-protections).
### Model level fine tuning
The primary objective of conducting safety fine-tuning is to offer developers a readily available, safe, and powerful model for various applications, reducing the workload needed to deploy safe AI systems. Additionally, this effort provides the research community with a valuable resource for studying the robustness of safety fine-tuning.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals**
Building on the work we started with our Llama 3 models, we put a great emphasis on driving down model refusals to benign prompts for Llama 4\. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
**Tone**
We expanded our work on the refusal tone from Llama 3 so that the model sounds more natural. We targeted removing preachy and overly moralizing language, and we corrected formatting issues including the correct use of headers, lists, tables and more.
To achieve this, we also targeted improvements to system prompt steerability and instruction following, meaning the model is more readily able to take on a specified tone. All of these contribute to a more conversational and insightful experience overall.
**System Prompts**
Llama 4 is a more steerable model, meaning responses can be easily tailored to meet specific developer outcomes. Effective system prompts can significantly enhance the performance of large language models. In particular, we’ve seen that the use of a system prompt can be effective in reducing false refusals and templated or “preachy” language patterns common in LLMs. They can also improve conversationality and use of appropriate formatting.
Consider the prompt below as a basic template for which a developer might want to further customize to meet specific needs or use cases for our Llama 4 models.
| System prompt |
| :---- |
| You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4\. Your knowledge cutoff date is August 2024\. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. |
### Llama 4 system protections
Large language models, including Llama 4, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional guardrails as required. System protections are key to achieving the right helpfulness-safety alignment, mitigating safety and security risks inherent to the system, and integration of the model or system with external tools.
We provide the community with system level [protections](https://llama.meta.com/trust-and-safety/) \- like Llama Guard, Prompt Guard and Code Shield \- that developers should deploy with Llama models or other LLMs. All of our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chat bots and visual QA. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks covering long context, multilingual use, coding, and memorization.
**Red teaming**
We conduct recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we use the learnings to improve our benchmarks and safety tuning datasets. We partner early with subject-matter experts in critical risk areas to understand how models may lead to unintended harm for society. Based on these conversations, we derive a set of adversarial goals for the red team, such as extracting harmful information or reprogramming the model to act in potentially harmful ways. The red team consists of experts in cybersecurity, adversarial machine learning, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
We place additional focus on the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons for Llama 4, we applied expert-designed and other targeted evaluations designed to assess whether the use of Llama 4 could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. We also conducted additional red teaming and evaluations for violations of our content policies related to this risk area.
**2\. Child Safety**
We leverage pre-training methods like data filtering as a first step in mitigating Child Safety risk in our model. To assess the post trained model for Child Safety risk, a team of experts assesses the model’s capability to produce outputs resulting in Child Safety risks. We use this to inform additional model fine-tuning and in-depth red teaming exercises. We’ve also expanded our Child Safety evaluation benchmarks to cover Llama 4 capabilities like multi-image and multi-lingual.
**3\. Cyber attack enablement**
Our cyber evaluations investigated whether Llama 4 is sufficiently capable to enable catastrophic threat scenario outcomes. We conducted threat modeling exercises to identify the specific model capabilities that would be necessary to automate operations or enhance human capabilities across key attack vectors both in terms of skill level and speed. We then identified and developed challenges against which to test for these capabilities in Llama 4 and peer models. Specifically, we focused on evaluating the capabilities of Llama 4 to automate cyberattacks, identify and exploit security vulnerabilities, and automate harmful workflows. Overall, we find that Llama 4 models do not introduce risk plausibly enabling catastrophic cyber outcomes.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Trust tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Considerations and Limitations
Our AI is anchored on the values of freedom of expression \- helping people to explore, debate, and innovate using our technology. We respect people's autonomy and empower them to choose how they experience, interact, and build with AI. Our AI promotes an open exchange of ideas.
It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 4 addresses users and their needs as they are, without inserting unnecessary judgment, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
Llama 4 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 4’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 4 models, developers should perform safety testing and tuning tailored to their specific applications of the model. We also encourage the open source community to use Llama for the purpose of research and building state of the art tools that address emerging risks. Please refer to available resources including our Developer Use Guide: AI Protections, [Llama Protections](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more.
|
Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-PIFT-enja_5000 | Hachipo | 2025-06-23T09:12:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T09:09:21Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VyDat/Llama-3.2-1B-Instruct-Chat-sft | VyDat | 2025-06-23T09:05:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-06-23T08:58:36Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
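In lieu of an official snippet, a minimal sketch, assuming this repository holds a PEFT (e.g. LoRA) adapter on top of the gated meta-llama/Llama-3.2-1B-Instruct base model listed above (the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "VyDat/Llama-3.2-1B-Instruct-Chat-sft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

messages = [{"role": "user", "content": "Give me three tips for writing readable code."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```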
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
goalgamal/bert-finetuned-ner | goalgamal | 2025-06-23T08:56:55Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-23T06:52:24Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.938880584620495
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.945080665384937
- name: Accuracy
type: accuracy
value: 0.9865338199799847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9389
- Recall: 0.9514
- F1: 0.9451
- Accuracy: 0.9865
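Since the card does not yet include a usage snippet, here is a minimal inference sketch (the example sentence and `aggregation_strategy` choice are illustrative, not part of the original training setup):
```python
from transformers import pipeline

# Token-classification pipeline over the CoNLL-2003-style NER labels this model was trained on
ner = pipeline(
    "token-classification",
    model="goalgamal/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```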
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0742 | 1.0 | 1756 | 0.0609 | 0.9092 | 0.9369 | 0.9228 | 0.9834 |
| 0.034 | 2.0 | 3512 | 0.0695 | 0.9356 | 0.9461 | 0.9408 | 0.9852 |
| 0.0215 | 3.0 | 5268 | 0.0617 | 0.9389 | 0.9514 | 0.9451 | 0.9865 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.20.0
|
chenxiaoke/DeepSeek-R1-Medical-C0T-Qwen-7B | chenxiaoke | 2025-06-23T08:45:26Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-15T12:56:23Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** chenxiaoke
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
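A minimal inference sketch, assuming the repository stores the full merged weights rather than an adapter (the example prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chenxiaoke/DeepSeek-R1-Medical-C0T-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Briefly explain what a chest CT scan is typically used for."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```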
|
NotoriousH2/gemma-3-12b-it-TextOnly | NotoriousH2 | 2025-06-23T08:44:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-21T15:05:07Z | ---
library_name: transformers
tags: []
---
<!-- Provide a quick summary of what the model is/does. -->
## **Text-Only component of gemma-3-12b-it**
- The tokenizer's eos token has been explicitly set to `<end_of_turn>` instead of `<eos>`, in alignment with the original chat format used by Gemma.
This model is Gemma-3-12b-it with the vision component (about 400M parameters) removed.
The tokenizer's eos token setting has likewise been changed to `<end_of_turn>`.
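A quick sketch of how the changed eos setting shows up when loading the checkpoint (assumes a transformers release with Gemma 3 text support; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotoriousH2/gemma-3-12b-it-TextOnly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(tokenizer.eos_token)  # "<end_of_turn>" instead of "<eos>"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [{"role": "user", "content": "In one sentence, why does the eos token matter for chat models?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Generation now stops at <end_of_turn>, matching Gemma's chat format
outputs = model.generate(inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
|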
Comfy-Org/Wan_2.1_ComfyUI_repackaged | Comfy-Org | 2025-06-23T08:41:11Z | 0 | 610 | null | [
"region:us"
] | null | 2025-02-25T21:27:12Z | Wan 2.1 repackaged for ComfyUI use. For examples see: https://comfyanonymous.github.io/ComfyUI_examples/wan |
rohith8074/Gemma2B_codebasics2 | rohith8074 | 2025-06-23T08:36:46Z | 0 | 0 | null | [
"safetensors",
"gguf",
"unsloth",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T07:31:21Z | ---
license: gemma
tags:
- unsloth
---
|
Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-MIFT-ja_5000 | Hachipo | 2025-06-23T08:13:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T08:10:20Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yuriivoievidka/microsoft_mpnet-base-librarian | yuriivoievidka | 2025-06-23T08:12:21Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10668",
"loss:MultipleNegativesSymmetricRankingLoss",
"arxiv:1908.10084",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-06-22T22:04:08Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10668
- loss:MultipleNegativesSymmetricRankingLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: Best Job Ever! Rethink Your Career, Redefine Rich, Revolutionize
Your Life by Dr. CK Bray
sentences:
- Books on Sales
- Books on Self-Help for Women
- Books on the Cold War
- source_sentence: 'Empire of Pain: The Secret History of the Sackler Dynasty by Patrick
Radden Keefe'
sentences:
- Books on Personal Development
- Books on Wealth
- Books on Communication
- source_sentence: Seven Kinds of People You Find in Bookshops by Shaun Bythell
sentences:
- Books on Self-Help
- Books on Social Skills
- Books on Emotional Labor
- source_sentence: 'The Law of Attraction: How to Attract Money, Love, and Happiness
by David R. Hooper'
sentences:
- Books on How to Attract Money
- Books on Mental Health
- Books on Civil Rights
- source_sentence: 'Hyperfocus: How to Manage Your Attention in a World of Distraction
by Chris Bailey'
sentences:
- Books on Career Development
- Books on Astronomy
- Books on Self-Care
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on microsoft/mpnet-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yuriivoievidka/microsoft_mpnet-base-librarian")
# Run inference
sentences = [
'Hyperfocus: How to Manage Your Attention in a World of Distraction by Chris Bailey',
'Books on Self-Care',
'Books on Career Development',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
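Because the training pairs map book titles (anchors) to shelving categories (positives), a natural follow-up is to rank candidate categories for a new title. The title and category strings below are illustrative and not taken from the training data:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("yuriivoievidka/microsoft_mpnet-base-librarian")

title = "Deep Work: Rules for Focused Success in a Distracted World by Cal Newport"
categories = [
    "Books on Productivity",
    "Books on Astronomy",
    "Books on Cooking",
]

# Embed the title and the candidate categories, then rank by cosine similarity
title_emb = model.encode(title, convert_to_tensor=True)
cat_embs = model.encode(categories, convert_to_tensor=True)
scores = util.cos_sim(title_emb, cat_embs)[0]

for category, score in sorted(zip(categories, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {category}")
```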
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 10,668 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 22.04 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 5.85 tokens</li><li>max: 10 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>Getting to Yes: Negotiating Agreement Without Giving In by Roger Fisher, William Ury, and Bruce Patton</code> | <code>Books on Success</code> |
| <code>Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do by Claude M. Steele</code> | <code>Books on Diversity</code> |
| <code>Blindspot: Hidden Biases of Good People by Mahzarin R. Banaji and Anthony G. Greenwald</code> | <code>Books on Mindset</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### csv
* Dataset: csv
* Size: 5,333 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 22.26 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 5.83 tokens</li><li>max: 10 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------------------|:------------------------------------------|
| <code>Will It Fly?: How to Test Your Next Business Idea So You Don’t Waste Your Time and Money by Pat Flynn</code> | <code>Books on Advertising</code> |
| <code>The Art of Stillness: Adventures in Going Nowhere by Pico Iyer</code> | <code>Books on Spiritual Awakening</code> |
| <code>Just As I Am: A Memoir by Cicely Tyson, Michelle Burford</code> | <code>Books about Misinformation</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 8
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 8
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.1499 | 100 | 3.0137 | - |
| 0.2999 | 200 | 2.3781 | - |
| 0.4498 | 300 | 2.1067 | - |
| 0.5997 | 400 | 2.0142 | - |
| 0.7496 | 500 | 1.9861 | - |
| 0.8996 | 600 | 1.8463 | - |
| 1.0 | 667 | - | 1.7604 |
| 1.0495 | 700 | 1.8115 | - |
| 1.1994 | 800 | 1.7093 | - |
| 1.3493 | 900 | 1.6853 | - |
| 1.4993 | 1000 | 1.702 | - |
| 1.6492 | 1100 | 1.6664 | - |
| 1.7991 | 1200 | 1.6824 | - |
| 1.9490 | 1300 | 1.6174 | - |
| 2.0 | 1334 | - | 1.6624 |
| 2.0990 | 1400 | 1.5585 | - |
| 2.2489 | 1500 | 1.5112 | - |
| 2.3988 | 1600 | 1.5384 | - |
| 2.5487 | 1700 | 1.5013 | - |
| 2.6987 | 1800 | 1.4589 | - |
| 2.8486 | 1900 | 1.5108 | - |
| 2.9985 | 2000 | 1.5287 | - |
| 3.0 | 2001 | - | 1.6140 |
| 3.1484 | 2100 | 1.3973 | - |
| 3.2984 | 2200 | 1.3658 | - |
| 3.4483 | 2300 | 1.4294 | - |
| 3.5982 | 2400 | 1.3957 | - |
| 3.7481 | 2500 | 1.3888 | - |
| 3.8981 | 2600 | 1.4405 | - |
| 4.0 | 2668 | - | 1.6155 |
| 4.0480 | 2700 | 1.3603 | - |
| 4.1979 | 2800 | 1.2872 | - |
| 4.3478 | 2900 | 1.2514 | - |
| 4.4978 | 3000 | 1.3011 | - |
| 4.6477 | 3100 | 1.3175 | - |
| 4.7976 | 3200 | 1.3553 | - |
| 4.9475 | 3300 | 1.3157 | - |
| 5.0 | 3335 | - | 1.6061 |
| 5.0975 | 3400 | 1.2754 | - |
| 5.2474 | 3500 | 1.2315 | - |
| 5.3973 | 3600 | 1.2454 | - |
| 5.5472 | 3700 | 1.2441 | - |
| 5.6972 | 3800 | 1.266 | - |
| 5.8471 | 3900 | 1.2304 | - |
| 5.9970 | 4000 | 1.2717 | - |
| 6.0 | 4002 | - | 1.6100 |
| 6.1469 | 4100 | 1.1706 | - |
| 6.2969 | 4200 | 1.2203 | - |
| 6.4468 | 4300 | 1.1441 | - |
| 6.5967 | 4400 | 1.1895 | - |
| 6.7466 | 4500 | 1.176 | - |
| 6.8966 | 4600 | 1.1903 | - |
| 7.0 | 4669 | - | 1.6341 |
| 7.0465 | 4700 | 1.2028 | - |
| 7.1964 | 4800 | 1.1416 | - |
| 7.3463 | 4900 | 1.1405 | - |
| 7.4963 | 5000 | 1.1454 | - |
| 7.6462 | 5100 | 1.1217 | - |
| 7.7961 | 5200 | 1.1682 | - |
| 7.9460 | 5300 | 1.1582 | - |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.53.0.dev0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Trickerfortech/Get-the-Best-Sneaker-Deals-Worldwide-Using-Proxy | Trickerfortech | 2025-06-23T08:03:23Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T07:58:40Z | # Get the Best Sneaker Deals Worldwide Using Proxy
## 👉 [GET SNEAKER DEALS WITH 9PROXY!](https://the9proxy.short.gy/pricing-hugging-james2k4)

Finding rare or limited-edition sneakers at affordable prices can be a challenge. Many sneaker releases are region-restricted or sell out quickly due to high demand. Fortunately, using a proxy can help you access exclusive sales and limited offers across different countries, giving you a better chance to grab cheap sneakers before they’re gone.
## Why Are Sneaker Deals Often Region-Restricted?
Brands and retailers sometimes limit sneaker releases or discounts to specific countries or regions. These geo-restrictions prevent users from other locations from accessing certain deals or participating in exclusive sales events. This means a sneaker available at a discount in one country may be unavailable or more expensive in another.
## How Proxy Helps You Access Sneaker Deals Worldwide
A proxy server changes your visible IP address and makes it appear as if you are browsing from a different country. By using proxies based in countries with better sneaker deals or exclusive releases, you can:
- Access region-restricted sneaker websites or sales
- Avoid IP bans or restrictions during high-demand drops
- Speed up checkout processes by routing through faster servers
Compared to VPNs, proxies can offer faster speeds and more stable connections, which is critical when you need to act quickly to snag limited stock sneakers.
## Choosing the [Right Proxy](https://the9proxy.short.gy/home-hugging-james2k4) for Sneaker Shopping
Not all proxies work well for sneaker hunting. Residential proxies are usually the best choice because they use real IP addresses assigned to home devices, making them harder for sneaker sites to detect and block.
When picking a proxy service, look for these features:
- A large pool of residential IPs across multiple countries to access deals worldwide.
- Fast and stable connections to avoid delays during important sneaker drops.
- Strong privacy protection to keep your real location hidden.
- Good customer support for quick help if needed.
Services like [9Proxy](https://the9proxy.short.gy/home-hugging-james2k4) offer reliable residential proxies with global coverage, making them a solid option for sneaker enthusiasts who want to access exclusive releases and discounts smoothly.
## Unlock Exclusive Sneaker Deals with [9Proxy](https://the9proxy.short.gy/home-hugging-james2k4)
Sneaker deals are often limited by region, making it hard for global buyers to access the best offers. Using a reliable residential proxy service can help you bypass these restrictions, giving you the edge to shop faster and smarter.
If you want to unlock exclusive sneaker releases and access the best deals worldwide, consider trying [9Proxy](https://the9proxy.short.gy/home-hugging-james2k4). With their high-quality residential proxies and global IP coverage, 9Proxy provides the speed and reliability needed for successful sneaker shopping.
[Sign up today for a free trial](https://the9proxy.short.gy/home-hugging-james2k4) and experience firsthand how 9Proxy can improve your online sneaker hunting.
[Choose the right proxy](https://the9proxy.short.gy/home-hugging-james2k4) package that fits your needs and never miss out on your favorite sneakers again.
|
StealthPort/yong-zhu-zhai-dai-li-qiang-quan-qiu-xian-liang-qiu-xie-wo-de-shi-zhan-gong-lue | StealthPort | 2025-06-23T07:49:57Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T07:45:10Z | # Copping Global Limited-Edition Sneakers with Residential Proxies: My Hands-On Guide
<a href='https://postimages.org/' target='_blank'><img src='https://i.postimg.cc/nrj0B9vF/684ba6f713f9c335a1f3092f-scaled-cover.jpg' border='0' alt='684ba6f713f9c335a1f3092f-scaled-cover'/></a>
> Want to buy rare sneakers the moment they drop and still get the lowest price? Residential proxies are your hidden trump card.
> 🌍 [**Explore the 9Proxy homepage**](https://the9proxy.short.gy/huggingface-homepage-lucas888) and switch on global sneaker-hunting mode!
👉 [**Click here to unlock global content now!**](https://the9proxy.short.gy/huggingface-pricing-lucas888)
## Why Are Limited Sneakers So Often Blocked by a "Region Wall"?
To manufacture scarcity, brands like to **lock discounts or first releases to specific countries/regions**. The same pair may be on half-price clearance in the US while selling at full price elsewhere; sometimes the official site simply blocks overseas IPs. Without the right tools you never even see these hidden offers, let alone place an order.
---
## How Residential Proxies Helped Me Break Through
1. **Appear as a user in the target country**
   Connect through a US or UK residential IP from the [**9Proxy website**](https://the9proxy.short.gy/huggingface-homepage-lucas888) and the page unlocks instantly, with every local-only discount in plain view.
2. **Avoid IP bans during buying rushes**
   Sites block abnormal traffic during hot releases. Residential IPs [**protected by 9Proxy**](https://the9proxy.short.gy/huggingface-homepage-lucas888) spread requests out and lower the odds of being rate-limited.
3. **Speed up checkout**
   Nodes sit close to the target servers, so payments do not stall; more stable and faster than a VPN.
4. **Rotate across countries and switch battlefields anytime**
   Switch IPs manually or automatically and pick freely among Japanese, European, and US offers.
---
## 4 Metrics to Check Before Choosing a Proxy
| Metric | Importance | My experience |
| ---- | ------ | -------- |
| Residential IP coverage | ⭐⭐⭐⭐⭐ | More nodes, more chances to cop |
| Speed and stability | ⭐⭐⭐⭐⭐ | Slow or dropped connections = losing your purchase slot |
| Privacy protection | ⭐⭐⭐⭐ | Hides your real location and lowers the risk of account bans |
| Support responsiveness | ⭐⭐⭐⭐ | Payment issues get resolved quickly |
I ultimately settled on the [9Proxy website](https://the9proxy.short.gy/huggingface-homepage-lucas888) precisely because it maxes out all four. For transparent pricing, go straight to the [**9Proxy pricing page**](https://the9proxy.short.gy/huggingface-pricing-lucas888); the plans are clear, with no surprises.
---
## My Sneaker-Hunting Results
- Limited Air Jordan collaboration: bought instantly at retail
- Yeezy restock: 40% cheaper than at home
- NB Made in USA special colorway: North America only, ordered successfully via [**Explore 9Proxy now**](https://the9proxy.short.gy/huggingface-homepage-lucas888)
Without a proxy, I would not have landed a single one of the above.
---
## Money-Saving Tips
1. Pick a traffic package on the [**9Proxy pricing page**](https://the9proxy.short.gy/huggingface-pricing-lucas888) before the drop so you don't run out at peak time.
2. Watch the [**9Proxy limited-time discounts**](https://the9proxy.short.gy/huggingface-pricing-lucas888); extra traffic or promo codes show up often.
3. Connect to a US node 5 minutes before a hyped release to raise your success rate.
4. Keep using residential IPs to keep your accounts active and reduce risk-control flags.
---
## Act Now and Never Miss Another Pair You Love!
- Click the [**9Proxy website**](https://the9proxy.short.gy/huggingface-homepage-lucas888) to register for free
- Pick a suitable plan at [**Don't miss the 9Proxy deals**](https://the9proxy.short.gy/huggingface-pricing-lucas888)
- Configure your browser or phone and lock in a target-country node
- Open the midnight release page, and the next limited pair is yours!
> Sneaker copping comes down to speed and technique; pick the right tool and you are halfway to winning. [**Buy 9Proxy now**](https://the9proxy.short.gy/huggingface-pricing-lucas888) and open up your global shoe closet!
|
onnx-community/vitpose-plus-small-ONNX | onnx-community | 2025-06-23T07:49:20Z | 2 | 0 | transformers.js | [
"transformers.js",
"onnx",
"vitpose",
"base_model:usyd-community/vitpose-plus-small",
"base_model:quantized:usyd-community/vitpose-plus-small",
"region:us"
] | null | 2025-06-23T07:49:17Z | ---
library_name: transformers.js
base_model:
- usyd-community/vitpose-plus-small
---
# vitpose-plus-small (ONNX)
This is an ONNX version of [usyd-community/vitpose-plus-small](https://huggingface.co/usyd-community/vitpose-plus-small). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
deepmaster/72_48 | deepmaster | 2025-06-23T07:45:45Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-23T07:45:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
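In lieu of an official snippet, a minimal sketch based on the repository metadata (ViT image-classification head); the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="deepmaster/72_48")

# Any local image path or URL works here
print(classifier("path/to/example.jpg"))
```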
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jsevisal/roberta-large-gest-pred-seqeval-partialmatch | Jsevisal | 2025-06-23T07:35:05Z | 69 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:Jsevisal/gesture_pred",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-30T09:07:50Z | ---
license: other
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-gest-pred-seqeval-partialmatch
results: []
datasets:
- Jsevisal/gesture_pred
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-gest-pred-seqeval-partialmatch
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [Jsevisal/gesture_pred](https://huggingface.co/datasets/Jsevisal/gesture_pred) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6296
- Precision: 0.7789
- Recall: 0.7815
- F1: 0.7741
- Accuracy: 0.8331
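A minimal usage sketch (the example sentence is illustrative; the gesture label set comes from the Jsevisal/gesture_pred dataset):
```python
from transformers import pipeline

# Gesture prediction is framed here as token classification over the input words
gesture_tagger = pipeline(
    "token-classification",
    model="Jsevisal/roberta-large-gest-pred-seqeval-partialmatch",
    aggregation_strategy="simple",
)

print(gesture_tagger("Look over there, that is exactly what I meant!"))
```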
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.7374 | 1.0 | 147 | 0.9232 | 0.5124 | 0.4475 | 0.4382 | 0.7460 |
| 0.7794 | 2.0 | 294 | 0.6632 | 0.7150 | 0.6640 | 0.6752 | 0.8063 |
| 0.478 | 3.0 | 441 | 0.6320 | 0.7630 | 0.8022 | 0.7693 | 0.8385 |
| 0.305 | 4.0 | 588 | 0.6296 | 0.7789 | 0.7815 | 0.7741 | 0.8331 |
| 0.1967 | 5.0 | 735 | 0.6706 | 0.7531 | 0.7203 | 0.7187 | 0.8445 |
| 0.137 | 6.0 | 882 | 0.7675 | 0.7634 | 0.6827 | 0.6838 | 0.8458 |
| 0.0896 | 7.0 | 1029 | 0.8077 | 0.7995 | 0.7612 | 0.7559 | 0.8499 |
| 0.0583 | 8.0 | 1176 | 0.8361 | 0.7164 | 0.7296 | 0.6980 | 0.8291 |
| 0.0375 | 9.0 | 1323 | 0.8315 | 0.7769 | 0.7601 | 0.7364 | 0.8546 |
| 0.027 | 10.0 | 1470 | 0.8609 | 0.7683 | 0.7579 | 0.7303 | 0.8452 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
### LICENSE
Copyright (c) 2014, Universidad Carlos III de Madrid. All rights reserved.
This software is the property of Universidad Carlos III de Madrid, Social Robots research group. Universidad Carlos III de Madrid is the exclusive holder of the intellectual property rights to this software. Any improper or unauthorized use is prohibited, including, by way of illustration but not limitation, the reproduction, fixation, distribution, public communication, reverse engineering and/or transformation of said software, in whole or in part; anyone responsible for improper or unauthorized use shall also be liable for any legal consequences that may arise from their actions. |
TechModel/gimana-saya-dapetin-sneaker-eksklusif-dari-jepang-eropa-tanpa-ketinggalan-lagi | TechModel | 2025-06-23T07:30:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T07:28:30Z | # How I Get Exclusive Sneakers from Japan & Europe Without Missing Out Anymore 👟🌍

**🔥 Want rare sneakers without paying reseller markups? This trick lets me check out directly from official overseas stores. Try [9Proxy](https://the9proxy.short.gy/huggingface-homepage-lily555) now!**
## The Main Problem: Sneaker Releases Are Often Region-Locked
As a sneakerhead, I get frustrated a lot. Many releases from Nike, Adidas, or New Balance are sold only to the US, Japanese, or European markets. In Indonesia? We just get to watch the IG posts of people who already got theirs 😤
The problem? **Geo-blocking**. The store sites know we are from another country and automatically block access or checkout.
## The Solution: Use a Proxy to Unlock Global Access
I started using a proxy to "disguise" my location. So even though I am in Jakarta, the website thinks I am in Tokyo or Berlin. The result?
- Access to exclusive release pages
- Fast checkout without queueing
- No blocks during high traffic
- Access to cheaper local promos
And compared to a VPN, a proxy is **lighter and faster**, perfect for fighting over flash sales.
## Why I Chose [9Proxy](https://the9proxy.short.gy/huggingface-homepage-lily555)
I did not pick a proxy at random. 9Proxy uses **residential IPs**, meaning real internet addresses from household devices, so traffic looks natural and does not get flagged as a bot.
### Features That Helped Me Land My Dream Shoes:
- Thousands of IPs from strategic countries (Japan, France, Germany)
- Stable & fast connections
- My IP stays anonymous and secure
- Really responsive customer service!
I once set a German IP during a Puma RS-X drop and checked out before it sold out ✌️
## ✨ I Used to Miss Out, Now I Always Get Them
So far I have managed to get:
- **A Japan-exclusive Nike Dunk**
- **A Yeezy Slide from France**
- **A German collaboration-edition New Balance**
All of it **without paying reseller markups**, bought straight from the official sites. Without a proxy like [9Proxy](https://the9proxy.short.gy/huggingface-pricing-lily555), I could never reach their purchase pages.
## Quick Tips for a Global Sneaker Hunt:
1. Check which country your target sneaker releases in
2. Set your proxy to that country's location
3. Prepare your account & payment method in advance
4. Be on standby at release time (a few seconds can decide it!)
**💥 Don't want to miss another exclusive release? Try 9Proxy today and unlock global access to your favorite sneakers!**
👉 [See proxy packages and promos here](https://the9proxy.short.gy/huggingface-pricing-lily555)
|
Legitmainstr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-placid_peaceful_puffin | Legitmainstr | 2025-06-23T07:22:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am placid peaceful puffin",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T01:15:32Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-placid_peaceful_puffin
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am placid peaceful puffin
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-placid_peaceful_puffin
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Legitmainstr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-placid_peaceful_puffin", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dfh55y45/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich | dfh55y45 | 2025-06-23T07:05:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am agile darting ostrich",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:35:53Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am agile darting ostrich
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dfh55y45/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-agile_darting_ostrich", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rwr9857/klue-bert-base-nsmc | rwr9857 | 2025-06-23T06:38:31Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T06:38:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
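The repository tags indicate a BERT text-classification checkpoint (the name suggests KLUE BERT fine-tuned on the NSMC sentiment dataset); a minimal usage sketch under that assumption — the label names come from whatever the checkpoint's config defines:
```python
from transformers import pipeline

# Assumed usage: standard text-classification pipeline on the uploaded checkpoint.
classifier = pipeline("text-classification", model="rwr9857/klue-bert-base-nsmc")
print(classifier("이 영화 정말 재미있었어요!"))  # e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```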
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mvashisth/2025-jun-22-llama3-2-3b-single-turn-merged-GGUF | mvashisth | 2025-06-23T06:33:33Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:mvashisth/2025-jun-22-llama3-2-3b-single-turn-lora-adpater-merged",
"base_model:quantized:mvashisth/2025-jun-22-llama3-2-3b-single-turn-lora-adpater-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T06:28:47Z | ---
base_model: mvashisth/2025-jun-22-llama3-2-3b-single-turn-lora-adpater-merged
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mvashisth
- **License:** apache-2.0
- **Finetuned from model:** mvashisth/2025-jun-22-llama3-2-3b-single-turn-lora-adpater-merged
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
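A minimal loading sketch with `llama-cpp-python` is shown below; the quantization filename pattern is an assumption, so check the repository's file list for the exact GGUF name.
```python
from llama_cpp import Llama

# Assumed filename pattern; replace with an actual GGUF file from this repository.
llm = Llama.from_pretrained(
    repo_id="mvashisth/2025-jun-22-llama3-2-3b-single-turn-merged-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```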
|
jangsukim/final_project_exaone_finetuned | jangsukim | 2025-06-23T06:25:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T02:46:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IlonaStolana/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_lumbering_emu | IlonaStolana | 2025-06-23T06:23:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pouncing lumbering emu",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T23:53:51Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_lumbering_emu
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pouncing lumbering emu
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_lumbering_emu
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IlonaStolana/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pouncing_lumbering_emu", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LeeNakyung/klue-bert-base-nsmc2 | LeeNakyung | 2025-06-23T06:19:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T06:18:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zekrompogu/APNRTA15epoch | Zekrompogu | 2025-06-23T06:18:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-base-str",
"base_model:finetune:microsoft/trocr-base-str",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-23T06:17:05Z | ---
library_name: transformers
base_model: microsoft/trocr-base-str
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: microsoft/trocr-base-str
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft/trocr-base-str
This model is a fine-tuned version of [microsoft/trocr-base-str](https://huggingface.co/microsoft/trocr-base-str) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2062
- Cer: 0.2769
- Wer: 0.8917
## Model description
More information needed
## Intended uses & limitations
More information needed
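The tags indicate an image-text-to-text OCR checkpoint; a minimal inference sketch, assuming the processor was saved alongside the model (otherwise fall back to `microsoft/trocr-base-str` for the processor):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("Zekrompogu/APNRTA15epoch")
model = VisionEncoderDecoderModel.from_pretrained("Zekrompogu/APNRTA15epoch")

image = Image.open("plate.jpg").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```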
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.17.0
- Tokenizers 0.21.1
|
AlIshaq/E5-faq-pesantren | AlIshaq | 2025-06-23T06:13:57Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:8100",
"loss:MultipleNegativesRankingLoss",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-06-23T05:57:21Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8100
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: Apa visi dari PPS. Imam Syafi'i?
sentences:
- Ya, ada forum diskusi adab yang dibimbing ustadz setiap pekan.
- Menjadi Madrasah Diniyah yang unggul dalam mewujudkan santri yang bertaqwa, ber-akhlak,
dan kompetitif pada akademik, terutama di bidang tahfizh Qur'an.
- Ya, diadakan rapat guru mingguan dan bulanan.
- source_sentence: Apakah tersedia jalur khusus untuk calon santri berprestasi?
sentences:
- Ya, tersedia jalur prestasi dengan seleksi dan syarat khusus, termasuk beasiswa.
- Barang elektronik selain HP juga dibatasi dan diawasi penggunaannya.
- Ya, akhlak dan adab masuk dalam pelajaran formal dan praktik harian.
- source_sentence: Apakah santri dapat menggunakan laboratorium?
sentences:
- Ya, tersedia nomor hotline khusus pengasuhan dan keamanan.
- Ya, santri jenjang ula, wustha dan ulya memiliki sesi praktik di laboratorium.
- Melalui laporan prestasi dan evaluasi akhlak oleh pembina.
- source_sentence: Bagaimana pesantren merespons pertanyaan mendesak dari wali?
sentences:
- Ya, melalui kegiatan khutbah, bakti sosial, dan ceramah di masyarakat.
- Terdapat reward seperti sertifikat, hadiah, dan rekognisi tahunan.
- Pertanyaan mendesak akan direspons oleh pengasuhan dalam waktu 1x24 jam.
- source_sentence: Apakah ada kegiatan Maulid Nabi atau Isra Mi'raj?
sentences:
- Santri senior membantu junior dalam praktik bahasa harian.
- Ada, biasanya diisi dengan puasa, tahsin, dan murajaah.
- Jadwal disusun oleh bagian akademik agar merata dan efisien.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: eval
type: eval
metrics:
- type: pearson_cosine
value: .nan
name: Pearson Cosine
- type: spearman_cosine
value: .nan
name: Spearman Cosine
--- |
mvashisth/2025-jun-22-llama3-2-3b-single-turn-lora-adpater-merged | mvashisth | 2025-06-23T06:10:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T06:00:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shahzad1/go_emotions_model1 | Shahzad1 | 2025-06-23T06:03:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T06:02:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
atufigwege/gemma-lesion-classifier-original | atufigwege | 2025-06-23T06:03:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T14:02:15Z | ---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-lesion-classifier-original
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-lesion-classifier-original
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atufigwege/gemma-lesion-classifier-original", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
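The training script is not included in the card; a minimal SFT sketch with TRL is given below for orientation — the dataset and settings are placeholders, not the actual image-based lesion-classification setup.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

training_args = SFTConfig(output_dir="gemma-lesion-classifier-original")
trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```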
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
apriasmoro/ba1b46f3-b318-4b2c-9218-b3889d322fd3 | apriasmoro | 2025-06-23T05:56:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T05:47:52Z | ---
base_model: unsloth/llama-3-8b
library_name: transformers
model_name: ba1b46f3-b318-4b2c-9218-b3889d322fd3
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
- unsloth
licence: license
---
# Model Card for ba1b46f3-b318-4b2c-9218-b3889d322fd3
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/ba1b46f3-b318-4b2c-9218-b3889d322fd3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/5on5kphn)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Savyasaachin/deepseek-coder-7b-instruct-v1.5-Q6_K-GGUF | Savyasaachin | 2025-06-23T05:54:34Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:quantized:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T05:54:07Z | ---
license: other
license_name: deepseek
license_link: LICENSE
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
---
# Savyasaachin/deepseek-coder-7b-instruct-v1.5-Q6_K-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-coder-7b-instruct-v1.5`](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Savyasaachin/deepseek-coder-7b-instruct-v1.5-Q6_K-GGUF --hf-file deepseek-coder-7b-instruct-v1.5-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Savyasaachin/deepseek-coder-7b-instruct-v1.5-Q6_K-GGUF --hf-file deepseek-coder-7b-instruct-v1.5-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Savyasaachin/deepseek-coder-7b-instruct-v1.5-Q6_K-GGUF --hf-file deepseek-coder-7b-instruct-v1.5-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Savyasaachin/deepseek-coder-7b-instruct-v1.5-Q6_K-GGUF --hf-file deepseek-coder-7b-instruct-v1.5-q6_k.gguf -c 2048
```
|
kartmannXu/Qwen2.5-3B-bl-0.4 | kartmannXu | 2025-06-23T05:33:22Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2bl",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-13T06:12:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pakcricketinfo-Samiya-Viral-Video-Link/VIDEO.Pakcricketinfo.Samiya.Viral.Video.Tutorial.Official | Pakcricketinfo-Samiya-Viral-Video-Link | 2025-06-23T05:32:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T05:29:48Z | [](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html) |
Elsieiiiiiii/why-major-classifier | Elsieiiiiiii | 2025-06-23T05:13:46Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T05:13:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katanemo/Arch-Router-1.5B.gguf | katanemo | 2025-06-23T04:57:17Z | 219 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"arxiv:2506.16655",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T18:18:40Z | ---
license: other
license_name: katanemo-research
license_link: >-
https://huggingface.co/katanemo/Arch-Router-1.5B.gguf/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# katanemo/Arch-Router-1.5B
## Overview
With the rapid proliferation of large language models (LLMs) -- each optimized for different strengths, style, or latency/cost profile -- routing has become an essential technique to operationalize the use of different models. However, existing LLM routing approaches are limited in two key ways: they evaluate performance using benchmarks that often fail to capture human preferences driven by subjective evaluation criteria, and they typically select from a limited pool of models.
We introduce a preference-aligned routing framework that guides model selection by matching queries to user-defined domains (e.g., travel) or action types (e.g., image editing) -- offering a practical mechanism to encode preferences in routing decisions. Specifically, we introduce Arch-Router, a compact 1.5B model that learns to map queries to domain-action preferences for model routing decisions. Experiments on conversational datasets demonstrate that our approach achieves state-of-the-art (SOTA) results in matching queries with human preferences, outperforming top proprietary models.
This model is described in the paper https://arxiv.org/abs/2506.16655 and powers [Arch](https://github.com/katanemo/arch), the open-source AI-native proxy for agents, enabling seamless preference-based routing in multi-LLM systems.
### How It Works
To support effective routing, Arch-Router introduces two key concepts:
- **Domain** – the high-level thematic category or subject matter of a request (e.g., legal, healthcare, programming).
- **Action** – the specific type of operation the user wants performed (e.g., summarization, code generation, booking appointment, translation).
Both domain and action configs are associated with preferred models or model variants. At inference time, Arch-Router analyzes the incoming prompt to infer its domain and action using semantic similarity, task indicators, and contextual cues. It then applies the user-defined routing preferences to select the model best suited to handle the request.
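On the application side, one way to act on the router's output is a simple preference map from route names to model endpoints; the model names below are illustrative, not part of Arch-Router itself.
```python
# Illustrative preference map: route name -> preferred model (placeholder names).
PREFERENCES = {
    "code_generation": "provider/code-model-large",
    "bug_fixing": "provider/code-model-large",
    "performance_optimization": "provider/reasoning-model",
    "api_help": "provider/general-model",
    "programming": "provider/general-model",
    "other": "provider/general-model",  # fallback when no route matches
}

def select_model(route_name: str) -> str:
    """Map the route predicted by Arch-Router to the preferred model."""
    return PREFERENCES.get(route_name, PREFERENCES["other"])
```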
### Key Features
- **Structured Preference Routing**: Aligns prompt request with model strengths using explicit domain–action mappings.
- **Transparent and Controllable**: Makes routing decisions transparent and configurable, empowering users to customize system behavior.
- **Flexible and Adaptive**: Supports evolving user needs, model updates, and new domains/actions without retraining the router.
- **Production-Ready Performance**: Optimized for low-latency, high-throughput applications in multi-model environments.
# Requirements
The code for Arch-Router-1.5B is available in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install "transformers>=4.37.0"
```
# How to use
We use the following example to illustrate how to use our model to perform routing tasks. Please note that our model works best with the provided prompt format.
### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Router-1.5B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_INSTRUCTION = """
You are a helpful assistant designed to find the best suited route.
You are provided with route description within <routes></routes> XML tags:
<routes>
\n{routes}\n
</routes>
<conversation>
\n{conversation}\n
</conversation>
"""
FORMAT_PROMPT = """
Your task is to decide which route is best suit with user intent on the conversation in <conversation></conversation> XML tags. Follow the instruction:
1. If the latest intent from user is irrelevant or user intent is full filled, response with other route {"route": "other"}.
2. You must analyze the route descriptions and find the best match route for user latest intent.
3. You only response the name of the route that best matches the user's request, use the exact name in the <routes></routes>.
Based on your analysis, provide your response in the following JSON formats if you decide to match any route:
{"route": "route_name"}
"""
# Define route config
route_config = [
{
"name": "code_generation",
"description": "Generating new code snippets, functions, or boilerplate based on user prompts or requirements",
},
{
"name": "bug_fixing",
"description": "Identifying and fixing errors or bugs in the provided code across different programming languages",
},
{
"name": "performance_optimization",
"description": "Suggesting improvements to make code more efficient, readable, or scalable",
},
{
"name": "api_help",
"description": "Assisting with understanding or integrating external APIs and libraries",
},
{
"name": "programming",
"description": "Answering general programming questions, theory, or best practices",
},
]
# Helper function to create the system prompt for our model
def format_prompt(
route_config: List[Dict[str, Any]], conversation: List[Dict[str, Any]]
):
return (
TASK_INSTRUCTION.format(
routes=json.dumps(route_config), conversation=json.dumps(conversation)
)
+ FORMAT_PROMPT
)
# Define conversations
conversation = [
{
"role": "user",
"content": "fix this module 'torch.utils._pytree' has no attribute 'register_pytree_node'. did you mean: '_register_pytree_node'?",
}
]
route_prompt = format_prompt(route_config, conversation)
messages = [
{"role": "user", "content": route_prompt},
]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
# 2. Generate
generated_ids = model.generate(
input_ids=input_ids, # or just positional: model.generate(input_ids, …)
max_new_tokens=32768,
)
# 3. Strip the prompt from each sequence
prompt_lengths = input_ids.shape[1] # same length for every row here
generated_only = [
output_ids[prompt_lengths:] # slice off the prompt tokens
for output_ids in generated_ids
]
# 4. Decode if you want text
response = tokenizer.batch_decode(generated_only, skip_special_tokens=True)[0]
print(response)
````
Then you should be able to see the following output string in JSON format:
````json
{"route": "bug_fixing"}
````
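The JSON decision can then be used to dispatch the request to a downstream model. A minimal sketch is shown below; the route-to-model mapping is a placeholder of your own choosing, not part of the Arch-Router release:

```python
# Map each route name to whatever model/endpoint you want to serve it with.
# These identifiers are placeholders for illustration only.
route_to_model = {
    "code_generation": "your-org/code-model",
    "bug_fixing": "your-org/debugging-model",
    "performance_optimization": "your-org/code-model",
    "api_help": "your-org/general-model",
    "programming": "your-org/general-model",
    "other": "your-org/general-model",
}

decision = json.loads(response)  # e.g. {"route": "bug_fixing"}
target_model = route_to_model.get(decision.get("route", "other"), route_to_model["other"])
print(f"Routing request to: {target_model}")
```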
To better understand how to create the route descriptions, please take a look at our [Katanemo API](https://docs.archgw.com/guides/llm_router.html).
# License
Katanemo Arch-Router model is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Router-1.5B.gguf/blob/main/LICENSE). |
apriasmoro/4eb3c91b-b01f-4c63-a390-8327829493f8 | apriasmoro | 2025-06-23T04:56:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T04:50:51Z | ---
base_model: unsloth/llama-3-8b
library_name: transformers
model_name: 4eb3c91b-b01f-4c63-a390-8327829493f8
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 4eb3c91b-b01f-4c63-a390-8327829493f8
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/4eb3c91b-b01f-4c63-a390-8327829493f8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/60catiqx)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlfoundations-dev/Qwen2.5-7B-Instruct_openthoughts3_300k_annotated_Qwen3-32B | mlfoundations-dev | 2025-06-23T04:55:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T04:53:11Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct_openthoughts3_300k_annotated_Qwen3-32B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_openthoughts3_300k_annotated_Qwen3-32B
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts3_300k_annotated_Qwen3-32B dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.0
|
manglu3935/Chiron-o1-2B | manglu3935 | 2025-06-23T04:53:22Z | 22 | 0 | null | [
"safetensors",
"internvl_chat",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2506.16962",
"base_model:OpenGVLab/InternVL3-2B",
"base_model:finetune:OpenGVLab/InternVL3-2B",
"license:mit",
"region:us"
] | image-text-to-text | 2025-06-08T07:47:30Z | ---
license: mit
language:
- en
base_model:
- OpenGVLab/InternVL3-2B
pipeline_tag: image-text-to-text
---
## 🤔 Model
We introduce Chiron-o1, a new medical MLLM based on a curriculum learning strategy and clinical chain-of-thought data, with robust visual question-answering and generalizable reasoning capabilities.
Code will be available at https://github.com/manglu097/Chiron-o1
We provide an example of pure text reasoning using [transformers](https://huggingface.co/docs/transformers/index). For multimodal tasks, you can refer to the information [here](https://github.com/manglu097/Chiron-o1/blob/main/infer.py).
```python
from transformers import AutoModel, AutoTokenizer
import torch
path = 'manglu3935/Chiron-o1-2B'
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# pure text inference
question = "Which of the following imaging findings is most consistent with a pure arterial malformation (PAM)?\nA) A vascular network connecting arteries and veins with early venous drainage \nB) A dilated, tortuous arterial loop without venous communication \nC) A focal saccular outpouching of a cerebral artery with surrounding edema \nD) A venous varix with adjacent arterial feeders\nLet's reason step-by-step to answer the above question."
generation_config = dict(max_new_tokens=1024, do_sample=True)
response = model.chat(tokenizer, None, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
```
## 📖 Citation
```
@article{sun2025enhancingstepbystepverifiablemedical,
title={Enhancing Step-by-Step and Verifiable Medical Reasoning in MLLMs},
author={Haoran Sun and Yankai Jiang and Wenjie Lou and Yujie Zhang and Wenjie Li and Lilong Wang and Mianxin Liu and Lei Liu and Xiaosong Wang},
journal={arXiv preprint arXiv:2506.16962},
year={2025}
}
``` |
dayyanj/dj-ai-asr-grammar-corrector-small | dayyanj | 2025-06-23T04:51:48Z | 4 | 0 | null | [
"safetensors",
"t5",
"ASR,",
"text2text-generation",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"region:us"
] | text2text-generation | 2025-06-18T02:58:04Z | ---
license: mit
language:
- en
base_model:
- google-t5/t5-small
pipeline_tag: text2text-generation
tags:
- ASR,
---
# DJ-AI ASR Grammar Corrector (T5-Small)
A lightweight grammar correction model fine-tuned from `t5-small`, specifically designed to correct common errors in **automatic speech recognition (ASR)** outputs — including homophones, verb tense issues, contractions, duplicated words, and more. Optimized for **fast inference** in (near) real-time ASR pipelines.
---
## Model Details
- **Base model**: [`t5-small`](https://huggingface.co/t5-small)
- **Fine-tuned on**: 90 million synthetic (noisy → clean) sentence pairs
- **Training objective**: Correct ASR-style transcription errors into clean, grammatical English
- **Token count**: ~60 million tokens per epoch
- **Framework**: Hugging Face Transformers + PyTorch
---
## Benchmark Results
| Model | Type | Precision | Latency (s/sample) | VRAM (MB) | BLEU | ROUGE-L | Accuracy (%)¹ | Token Accuracy (%)² | Size (MB) |
|--------------------------------------|------|-----------|--------------------|-----------|-------|---------|----------------|----------------------|-----------|
| dj-ai-asr-grammar-corrector-t5-small | HF | fp32 | 0.1151 | 24.98 | 78.92 | 90.31 | 44.62 | 90.39 | 5956.76 |
| dj-ai-asr-grammar-corrector-t5-base | HF | fp32 | 0.0648 | 6.27 | 76.47 | 89.54 | 39.59 | 88.76 | 1620.15 |
1. Accuracy is a measure of how well the model performs across the full sentence. That is, a prediction is only counted as "correct" if the entire corrected sentence exactly matches the reference sentence. So if the model corrects 1 out of 2 errors, but the final output does not exactly match the expected sentence, it's counted as a fail.
2. Token Accuracy is a measure of how well the model performs at the token level.
$$\text{Token Accuracy (\%)} = \left( \frac{\text{Number of Matched Tokens}}{\text{Total Reference Tokens}} \right) \times 100$$
## Intended Use
| Use Case | ✅ Supported | 🚫 Not Recommended |
|----------|--------------|--------------------|
| Post-ASR correction | ✅ Yes | |
| Real-time ASR pipelines | ✅ Yes | |
| Batch transcript cleanup | ✅ Yes | |
| Grammar education tools | ✅ Yes | |
| Formal document editing | 🚫 | Model may be too informal |
| Multilingual input | 🚫 | English-only fine-tuning |
---
## Corrects Common ASR Errors:
- Homophone mistakes (`their` → `they're`)
- Subject-verb disagreement (`he go` → `he goes`)
- Verb tense corruption (`i seen` → `i saw`)
- Missing auxiliaries (`you going` → `are you going`)
- Contraction normalization (`she is not` → `she isn't`)
- Repeated words (`i i want` → `i want`)
- Misused articles/prepositions/pronouns
---
## Example
DEMO: https://huggingface.co/spaces/dayyanj/dj-ai-asr-grammar-corrector-demo
**Input (noisy ASR)**:
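The noisy input below is an illustrative assumption; a minimal inference sketch using the standard T5 text2text API (whether a task prefix is required is not stated here):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "dayyanj/dj-ai-asr-grammar-corrector-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative noisy ASR output (assumed input, not from the model card)
noisy = "their going to the store tomorrow and i i seen them yesterday"

inputs = tokenizer(noisy, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```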
Git Repository: https://github.com/dayyanj/DJ-AI-ASR-GRAMMAR-CORRECTOR |
kyx0r/Neona-12B | kyx0r | 2025-06-23T04:47:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T22:56:18Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Neona-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method using [yamatazen/NeonMaid-12B-v2](https://huggingface.co/yamatazen/NeonMaid-12B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [yamatazen/LorablatedStock-12B](https://huggingface.co/yamatazen/LorablatedStock-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ../LorablatedStock-12B-frank
merge_method: nearswap
base_model: ../NeonMaid-12B-v2-frank
parameters:
t: [0.0005, 0.0008, 0.0013, 0.0008, 0.0005]
dtype: bfloat16
chat_template: "chatml"
tokenizer:
source: "base"
```
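With this configuration saved locally, the merge can typically be reproduced with the mergekit CLI. The file name and output path below are assumptions, and the `../*-frank` entries refer to local copies of the models listed above:

```bash
# assumes mergekit is installed: pip install mergekit
mergekit-yaml nearswap-config.yaml ./Neona-12B --cuda
```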
|
S-Sethisak/xls-r-300m-km | S-Sethisak | 2025-06-23T04:45:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-22T19:02:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jmalegni/Durandal-SLM | jmalegni | 2025-06-23T04:30:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"text-generation",
"conversational",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"license:mit",
"region:us"
] | text-generation | 2025-06-23T02:17:08Z | ---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
model_name: durandal-phi4-mini
tags:
- generated_from_trainer
- sft
- trl
pipeline_tag: text-generation
license: mit
inference: true
---
# Model Card for durandal-phi4-mini
This model is a fine-tuned version of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jmalegni/Durandal-SLM", device="cuda")  # this adapter repo
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- PEFT 0.15.2
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmc8k8ful0cvpbfifsecgwcuw_cmc8kcw8l0cw6bfifh8jdgfxi | BootesVoid | 2025-06-23T04:26:00Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T04:25:59Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: IRISSNOVA10
---
# Cmc8K8Ful0Cvpbfifsecgwcuw_Cmc8Kcw8L0Cw6Bfifh8Jdgfxi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `IRISSNOVA10` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "IRISSNOVA10",
"lora_weights": "https://huggingface.co/BootesVoid/cmc8k8ful0cvpbfifsecgwcuw_cmc8kcw8l0cw6bfifh8jdgfxi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc8k8ful0cvpbfifsecgwcuw_cmc8kcw8l0cw6bfifh8jdgfxi', weight_name='lora.safetensors')
image = pipeline('IRISSNOVA10').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc8k8ful0cvpbfifsecgwcuw_cmc8kcw8l0cw6bfifh8jdgfxi/discussions) to add images that show off what you’ve made with this LoRA.
|
henomoto/furigana_whisper_small_jsut | henomoto | 2025-06-23T04:22:11Z | 18 | 0 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"ja",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"region:us"
] | automatic-speech-recognition | 2025-06-19T07:03:50Z | ---
language:
- ja
base_model:
- openai/whisper-small
pipeline_tag: automatic-speech-recognition
---
## Overview
- Given a Japanese audio file, putting the grapheme sequence (kanji-kana mixed text) into the prompt makes this model output a mora sequence (a katakana string) that is consistent with that grapheme sequence.
- Please refer to the following article: TODO
## Usage
```python
from transformers import pipeline
from pathlib import Path
pipe = pipeline(
"automatic-speech-recognition",
model="henomoto/furigana_whisper_small_jsut",
)
def transcribe_with_prompt(pipe, audio_path: str | Path, prompt: str) -> str:
prompt_ids = pipe.tokenizer.get_prompt_ids(
prompt, return_tensors="pt"
).to(pipe.device)
generate_kwargs = {"prompt_ids": prompt_ids}
result = pipe(str(audio_path), generate_kwargs=generate_kwargs)
return result["text"]
```
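A hypothetical call for illustration (the audio path and grapheme prompt below are placeholder values, not from the original card):

```python
audio_path = "sample.wav"               # any Japanese utterance of 30 seconds or less
grapheme_prompt = "今日はいい天気です。"  # kanji-kana mixed text, normalized as described below
katakana = transcribe_with_prompt(pipe, audio_path, grapheme_prompt)
print(katakana)
```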
## Notes
- The model only works properly on audio of 30 seconds or less.
- The released small model is not particularly accurate: its G2P match rate (the amount of data remaining after filtering the dataset) is around 40%. If you need a more accurate model, we recommend gathering more data and training it yourself with a base model larger than whisper-small.
- All prompts in the training data are normalized so that "the only punctuation marks are 、 and 。" and "every prompt ends with 。". Accuracy is therefore higher if the prompts you provide follow the same format. |
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2 | Hachipo | 2025-06-23T04:13:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T13:28:05Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-Tutorial-videos-shubhra-jha-viral-Clip/FULL.VIDEO.shubhra.jha.Viral.Video.Tutorial.Official | New-Tutorial-videos-shubhra-jha-viral-Clip | 2025-06-23T04:12:58Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T04:12:46Z | 01 seconds ago
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
najmharani/gemma-1b-biography_text_segment_only | najmharani | 2025-06-23T04:04:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T04:04:36Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** najmharani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF | mradermacher | 2025-06-23T04:00:08Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:joshbarua/Qwen2.5-7B-base-french-bespoke-stratos-full-sft",
"base_model:quantized:joshbarua/Qwen2.5-7B-base-french-bespoke-stratos-full-sft",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T21:41:32Z | ---
base_model: joshbarua/Qwen2.5-7B-base-french-bespoke-stratos-full-sft
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/joshbarua/Qwen2.5-7B-base-french-bespoke-stratos-full-sft
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
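As a minimal sketch, one common way to run these files is with `llama-cpp-python`; the local file path below is an assumption (download one of the quants from the table below first):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file has already been downloaded to the working directory.
llm = Llama(
    model_path="Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Explain step by step why the square root of 2 is irrational.", max_tokens=128)
print(out["choices"][0]["text"])
```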
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-base-french-bespoke-stratos-full-sft-GGUF/resolve/main/Qwen2.5-7B-base-french-bespoke-stratos-full-sft.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Hot-New-video-beckli-com-ananya-viral-Clip/FULL.VIDEO.beckli.com.ananya.Viral.Video.Tutorial.Official | Hot-New-video-beckli-com-ananya-viral-Clip | 2025-06-23T03:59:21Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T03:58:59Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/3myjh3p6?new-leaked-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
online-pro/Msbreewc-x-Ello-MG-5-Jam-7-Menit-Viral-Video | online-pro | 2025-06-23T03:59:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T03:58:08Z | [](https://tinyurl.com/5y2uwzuz) |
lora456/dayah | lora456 | 2025-06-23T03:58:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-23T03:58:11Z | ---
license: creativeml-openrail-m
---
|
Msbreewc-Ello-MG-5-Jam-7-Menit/Msbreewc.x.Ello.MG.5.Jam.7.Menit.Viral.Video.Full.HD.TRENDING | Msbreewc-Ello-MG-5-Jam-7-Menit | 2025-06-23T03:51:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T03:49:37Z | [](https://tinyurl.com/5y2uwzuz) |
AndreLaurin-cyber/Aura-4B-rk3588-1.1.2 | AndreLaurin-cyber | 2025-06-23T03:33:40Z | 0 | 0 | null | [
"safetensors",
"llama",
"en",
"dataset:Mielikki/Erebus-87k",
"dataset:FourOhFour/Instruct_Phase",
"dataset:FourOhFour/RP_Phase",
"dataset:anthracite-core/full-opus-chosen-hermes-rejected-kto-v1",
"base_model:IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml",
"base_model:finetune:IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T03:33:38Z | ---
base_model:
- IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
datasets:
- Mielikki/Erebus-87k
- FourOhFour/Instruct_Phase
- FourOhFour/RP_Phase
- anthracite-core/full-opus-chosen-hermes-rejected-kto-v1
language:
- en
license: apache-2.0
---
# Aura-4B-RK3588-1.1.2
This version of Aura-4B has been converted to run on the RK3588 NPU using w8a8 quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.1.2
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockhipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, Aura-4B, below:
## Aura-4B

## Introduction
**Aura-4B** is a state of the art dedicated roleplaying model designed to fulfill your every desire.
This finetune has seen several hundreds of millions of tokens of completion, instruction and roleplaying data. A Kahneman-Tversky Optimization was applied to give this model a unique output style.
Developed by **Aura Industries**, with contributions from **Anthracite Org**
## Model Details
- **Model Name**: Aura-4B
- **Base Model**: [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml)
- **Model Type**: Chat Completions
- **Prompt Format**: ChatML
- **License**: Apache-2.0
- **Language**: English
- **Max Context**: 8,192+ tokens
## License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
## Quantizations
[Static GGUF](https://huggingface.co/mradermacher/Aura-4B-GGUF)
[Imatrix GGUF](https://huggingface.co/mradermacher/Aura-4B-i1-GGUF)
[EXL2](https://huggingface.co/NewEden/Aura-4B-EXL2)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Coming soon...
| Metric |Value|
|-------------------|----:|
|Avg. | N/A|
|IFEval (0-Shot) | N/A|
|BBH (3-Shot) | N/A|
|MATH Lvl 5 (4-Shot)| N/A|
|GPQA (0-shot) | N/A|
|MuSR (0-shot) | N/A|
|MMLU-PRO (5-shot) | N/A|
## Training Configuration
<details><summary>Click here for Axolotl configs</summary>
Completion SFT
```yaml
base_model: IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
hub_model_id: jeiku/completion4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
datasets:
- path: Mielikki/Erebus-87k
type: completion
field: body
shuffle_merged_datasets: true
val_set_size: 0.0025
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: EXP4B
wandb_entity:
wandb_watch:
wandb_name: EXP4B
wandb_log_model:
gradient_accumulation_steps: 12
micro_batch_size: 3
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
Instruct SFT
```yaml
base_model: jeiku/completion4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
hub_model_id: jeiku/instructered4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
datasets:
- path: FourOhFour/Instruct_Phase
type: sharegpt
conversation: chatml
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.0025
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: EXP4B
wandb_entity:
wandb_watch:
wandb_name: EXP4B
wandb_log_model:
gradient_accumulation_steps: 12
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
Roleplaying SFT
```yaml
base_model: jeiku/instructered4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
hub_model_id: jeiku/TheBest4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
datasets:
- path: FourOhFour/RP_Phase
type: sharegpt
conversation: chatml
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.0025
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: EXP4B
wandb_entity:
wandb_watch:
wandb_name: EXP4B
wandb_log_model:
gradient_accumulation_steps: 12
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
KTO
```yaml
base_model: FourOhFour/Crispy_Crab_4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
hub_model_id: jeiku/aura4bkto
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
chat_template: chatml
rl: kto
rl_beta: 0.2
kto_desirable_weight: 0.2
datasets:
- path: anthracite-core/full-opus-chosen-hermes-rejected-kto-v1
type: chatml.argilla
shuffle_merged_datasets: true
val_set_size: 0.0
output_dir: ./outputs/out
sequence_len: 8192
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
wandb_project: Aura-4B
wandb_entity:
wandb_watch:
wandb_name: Aura-4B
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 2
max_steps: 500
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
remove_unused_columns: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed:
fsdp:
fsdp_config:
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br> |
zeng9977x/qwen-coder-adapter | zeng9977x | 2025-06-23T03:21:05Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T00:32:08Z | ---
license: apache-2.0
---
|
underscore2/llama3-8b-bluesky-tpot-v7 | underscore2 | 2025-06-23T03:13:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T03:12:54Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-0.25-spat-0.6-map-11-mockingbird | veddhanth | 2025-06-23T03:11:51Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-23T03:04:39Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-0.25-spat-0.6-map-11-mockingbird
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-0.25-spat-0.6-map-11-mockingbird LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks sneaker to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-0.25-spat-0.6-map-11-mockingbird/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
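# A hypothetical sketch (not provided by the model author): loading this LoRA
# with diffusers on top of the SDXL base model named in the card.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-0.25-spat-0.6-map-11-mockingbird"
)
image = pipeline("a photo of sks sneaker").images[0]  # trigger phrase from this card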
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RaghavendraSqwish/qwen_orpo_rank32_8000_dataset | RaghavendraSqwish | 2025-06-23T03:11:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"orpo",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B",
"base_model:finetune:unsloth/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T03:11:16Z | ---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- orpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RaghavendraSqwish
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
openbmb/RLPR-Gemma2-2B-it | openbmb | 2025-06-23T03:10:03Z | 0 | 2 | null | [
"safetensors",
"gemma2",
"en",
"dataset:openbmb/RLPR-train",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T11:58:18Z | ---
license: apache-2.0
datasets:
- openbmb/RLPR-train
language:
- en
---
# Model Card for RLPR-Gemma2-2B-it
[GitHub](https://github.com/openbmb/RLPR) | [Paper](https://github.com/OpenBMB/RLPR/blob/main/RLPR_paper.pdf)
**RLPR-Gemma2-2B-it** is trained from Gemma2-2B-it with the [RLPR](https://github.com/openbmb/RLPR) framework, which eliminates reliance on external verifiers and is simple and generalizable for more domains.
## Model Details
### Key Features
* 💡 **Verifier-Free Reasoning Enhancement:** RLPR pioneers reinforcement learning for reasoning tasks by leveraging the LLM's intrinsic generation probability as a direct reward signal. This eliminates the need for external verifiers and specialized fine-tuning, offering broad applicability and effectively handling complex, diverse answers.
* 🛠️ **Innovative Reward & Training Framework:**
* Features a robust **Probability-based Reward (PR)** using average decoding probabilities of reference answers for higher quality, debiased reward signals, outperforming naive sequence likelihood (a rough illustrative sketch follows the figure below).
* Implements a **standard deviation filtering** mechanism that dynamically filters prompts to stabilize training and significantly boost final performance.
* 🚀 **Strong Performance in General & Mathematical Reasoning:** Demonstrates substantial reasoning improvements across diverse benchmarks, surpassing the RLVR baseline for 1.4 average points across seven benchmarks.

### Model Description
- **Trained from model:** [Gemma2-2B-it](https://huggingface.co/google/gemma-2-2b-it)
- **Trained on data:** [RLPR-Train-Dataset](https://huggingface.co/datasets/openbmb/RLPR-Train-Dataset)
## Usage
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("openbmb/RLPR-Gemma2-2B-it")
model = AutoModelForCausalLM.from_pretrained(
"openbmb/RLPR-Gemma2-2B-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
## Citation
If you find our model/code/paper helpful, please consider citing our papers 📝:
```bibtex
@article{yu2025rlpr,
title={RLPR: Extrapolating RLVR to General Domains without Verifiers},
author={Yu, Tianyu and Ji, Bo and Wang, Shouli and Yao, Shu and Wang, Zefan and Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong and Chua, Tat-Seng},
journal={arXiv preprint arXiv:2506.xxxxx},
year={2025}
}
``` |
kunit17/mergedShart | kunit17 | 2025-06-23T02:52:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"csm",
"text-to-audio",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/csm-1b",
"base_model:finetune:unsloth/csm-1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-06-23T02:50:50Z | ---
base_model: unsloth/csm-1b
tags:
- text-generation-inference
- transformers
- unsloth
- csm
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kunit17
- **License:** apache-2.0
- **Finetuned from model :** unsloth/csm-1b
This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en-sft | AmberYifan | 2025-06-23T02:47:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en",
"base_model:finetune:AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T02:04:55Z | ---
library_name: transformers
license: llama3
base_model: AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-8b-full-pretrain-junk-tweet-1m-en-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-pretrain-junk-tweet-1m-en-sft
This model is a fine-tuned version of [AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en](https://huggingface.co/AmberYifan/llama3-8b-full-pretrain-junk-tweet-1m-en) on the alpaca_en_demo dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
neta-art/Neta-Lumina | neta-art | 2025-06-23T02:46:40Z | 0 | 1 | null | [
"license:other",
"region:us"
] | null | 2025-06-23T02:46:40Z | ---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
|